In my most recent commentary on SB 1047, and in several Twitter interactions, I have argued that one of the biggest marks against the bill is that it is simply premature. It would be better, I believe, to wait until the potential for catastrophic risks from AI models is more clearly demonstrated. If such evidence emerges (and to be clear, I would not be shocked if it emerged soon), we will have the opportunity to craft a bill that is both grounded in empirical evidence and, at least conceivably, tailored more precisely to the risks that seem likeliest (rather than stabbing in the dark, as we currently are).
If evidence emerges that future models are indeed dangerous, that evidence is likely to push the federal government toward action. And because the federal government is better suited than California's to regulate frontier AI, waiting would not only yield a better-informed bill; it would also yield a bill from the appropriate jurisdiction.
More broadly, SB 1047 sets a precedent of regulating emerging technologies based on almost entirely speculative risks. Given the number of promising technologies I expect to emerge in the coming decade, I do not think it would be wise to set this precedent.