What Ever Happened to the AI Apocalypse?

submitted by
Style Pass
2024-06-06 22:00:04

For a few years now, lots of people have been wondering what Sam Altman thinks about the future — or perhaps what he knows about it as the CEO of OpenAI, the company that kicked off the recent AI boom. He’s been happy to tell them about the end of the world. “If this technology goes wrong, it can go quite wrong,” he told a Senate committee in May 2023. “What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT,” he said last June. “A misaligned superintelligent AGI could cause grievous harm to the world,” he wrote in a blog post on OpenAI’s website that year.

Before the success of ChatGPT thrust him into the spotlight, he was even less circumspect. “AI will probably, like, most likely lead to the end of the world, but in the meantime, there’ll be great companies,” he cracked during an interview in 2015. “Probably AI will kill us all,” he joked at an event in New Zealand around the same time; soon thereafter, he would tell a New Yorker reporter about his plans to flee there with his friend Peter Thiel in the event of an apocalypse (either there or “a big patch of land in Big Sur” he could fly to). Altman also wrote on his personal blog that “superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.” Returning, again, to last year: “The bad case — and I think this is important to say — is like lights out for all of us.” He wasn’t alone in expressing such sentiments. In his capacity as CEO of OpenAI, he signed his name to a group statement arguing that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” joining a range of people in — and interested in — AI, including notable figures at Google, OpenAI, Microsoft, and xAI.

The tech industry’s next big thing might be a doomsday machine, according to the tech industry itself, and the race is on to summon a technology that might end the world. It’s a strange mixed message, to say the least, but it’s hard to overstate how thoroughly the apocalypse — invoked as a serious worry or a reflexive aside — has permeated the mainstream discourse around AI. Unorthodox thinkers and philosophers have seen their longstanding theories and concerns about superintelligence get mainstream consideration. But the end of the world has also become product-event material and fundraising fodder. In discussions about artificial intelligence, acknowledging the outside chance of ending human civilization has come to resemble a tic. On AI-startup websites, the prospect of human annihilation appears as boilerplate.
