The technological singularity — the point at which artificial general intelligence surpasses human intelligence — is coming. But will it usher in humanity's salvation, or lead to its downfall?
In 2024, Scottish futurist David Wood was taking part in an informal roundtable discussion at an artificial intelligence (AI) conference in Panama when the conversation turned to how we can avoid the most disastrous AI futures. His sarcastic answer was far from reassuring.
First, we would need to amass every piece of AI research ever published, from Alan Turing's seminal 1950 paper to the latest preprint studies. Then, he continued, we would need to burn it all. To be extra careful, we would need to round up every living AI scientist — and shoot them dead. Only then, Wood said, could we guarantee sidestepping the "non-zero chance" of disastrous outcomes ushered in by the technological singularity — the "event horizon" moment when AI develops general intelligence that surpasses human intelligence.
Wood, himself a researcher in the field, was obviously joking about this "solution" for mitigating the risks of artificial general intelligence (AGI). But buried in his sardonic response was a kernel of truth: The risks a superintelligent AI poses are terrifying to many people because they seem unavoidable. Most scientists predict that AGI will be achieved by 2040, but some believe it could happen as soon as next year.