
Ninety-five theses on AI - by Samuel Hammond - Second Best


As a temporary measure, using compute thresholds to pick out the AGI labs for safety-testing and disclosures is as light-touch and well-targeted as it gets.

The dogma that we should only regulate technologies based on “use” or “risk” may sound more market-friendly, but often results in a far broader regulatory scope than technology-specific approaches (see: the EU AI Act).

The use of the Defense Production Act to require disclosures from frontier labs is appropriate given the unique affordances available to the Department of Defense, and the bona fide national security risks associated with sufficiently advanced forms of AI.

You can question the nearness of AGI / superintelligence / other “dual use” capabilities and still see the invocation of the DPA as prudent for the option value it provides under conditions of fundamental uncertainty.

Requiring safety testing and disclosures for the outputs of $100 million-plus training runs is neither an example of regulatory capture nor a meaningful barrier to entry relative to the cost of compute.
