This week, California’s legislature introduced SB 1047: the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill, introduced by State Senator Scott Wiener (liked by many, myself included, for his pro-housing stance), would create a sweeping regulatory regime for AI, apply the precautionary principle to all AI development, and effectively outlaw all new open-source AI models—possibly throughout the United States.
I didn’t intend to write a second post this week, but when I saw this, I knew I had to: I analyze state and local policy for a living (n.b.: nothing I write on this newsletter is on behalf of the Hoover Institution or Stanford University), and this is too much to pass up.
A few caveats: I am not a lawyer, so I may err on legal nuances, and some things that seem ambiguous to me may in fact be clearer than I suspect. Also, an important (though not make-or-break) assumption of this piece is that open-source AI is a net positive for the world in terms of both innovation and safety (see my article here).