How I would regulate AI - by Noah Smith - Noahpinion

Submitted by Style Pass, 2024-05-13 20:00:16

I thought I’d take a break from writing about weighty matters like trade wars and presidential elections, and talk about something more whimsical and lighthearted — how to keep quasi-sentient computers from taking all our jobs and then hunting us to extinction with swarms of autonomous drones!

Well, OK, I don’t think we’re going to be facing off with Skynet anytime soon. But generative AI is undeniably a very powerful new technology, and the list of powerful new technologies that the human race hasn’t used for destructive purposes is very short indeed. So it probably makes sense to start thinking about how to use regulation to decrease the likelihood that AI will be used to cause catastrophes.

That’s a lot easier said than done, however. It’s very hard to predict what kind of regulation would make a technology safer before the harms materialize, and it’s very easy to create regulation that slows down technological progress. So a priori, a fairly likely outcome of AI regulation is that AI progress slows down, but AI still ends up causing harm in ways that the regulators never anticipated.

Recognizing this basic difficulty, the Biden administration has wisely taken a light touch — at least in the U.S. There was some speculation that Biden’s executive order on AI last October would focus on limiting AI capabilities. But instead, the order’s main protection against existential risk is simply a mandate for safety testing on foundational models (like ChatGPT and Gemini). It also has provisions to protect against the non-existential risks of AI — job displacement, deepfakes, erosion of privacy, and so on.
