Six principles for thinking about AI risk

When OpenAI released GPT-4 in March 2023, its surprising capabilities triggered a groundswell of support for AI safety regulation. Dozens of prominent scientists and business leaders signed a statement calling for a six-month pause on AI development. When OpenAI CEO Sam Altman called for a new government agency to license AI models at a Congressional hearing in May 2023, both Democratic and Republican senators seemed to take the idea seriously.

It took longer for skeptics of existential risk to find their footing, perhaps because few people outside the tight-knit AI safety community were paying attention to the issue before the release of ChatGPT. But in recent months the intellectual climate has changed significantly, and skeptical arguments have gained more traction.

Last month a pair of Princeton computer scientists published a new book that includes my favorite case for skepticism about existential risks from AI. In AI Snake Oil, Arvind Narayanan and Sayash Kapoor write about AI capabilities in a wide range of settings, from criminal sentencing to moderating social media. My favorite part of the book is Chapter 5, which takes the arguments of AI doomers head-on.
