Gov. Gavin Newsom on Sunday vetoed SB 1047, an artificial intelligence safety bill that would have established requirements for developers of advanced AI models to create protocols aimed at preventing catastrophes.
The bill, introduced by Sen. Scott Wiener (D-San Francisco), would have required developers to submit their safety plans to the state attorney general, who could hold them liable if AI models they directly control were to cause harm or pose imminent threats to public safety.
Additionally, the legislation would have required tech firms to be able to turn off the AI models they directly control if things went awry.
In his veto message, Newsom said the legislation could give the public a “false sense of security about controlling this fast-moving technology” because it targeted only large-scale and expensive AI models and not smaller, specialized systems.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom’s veto message stated. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”