Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway


Speaking in a "Great AI Debate" seminar at the four-day Abundance Summit earlier this month, Musk recalculated his previous risk assessment on the technology, saying, "I think there's some chance that it will end humanity. I probably agree with Geoff Hinton that it's about 10% or 20% or something like that."

Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, told Business Insider that Musk is right in saying that AI could be an existential risk for humanity, but "if anything, he is a bit too conservative" in his assessment.

"Actual p(doom) is much higher in my opinion," Yamploskiy said, referring to the "probability of doom" or the likelihood that AI takes control of humankind or causes a humanity-ending event, such as creating a novel biological weapon or causing the collapse of society due to a large-scale cyber attack or nuclear war.

The New York Times called p(doom) "the morbid new statistic that is sweeping Silicon Valley," with various tech executives cited by the outlet as estimating anywhere from a 5% to 50% chance of an AI-driven apocalypse. Yampolskiy places the risk "at 99.999999%."
