This is not the first time humanity has stared down the possibility of extinction at the hands of its technological creations. But the threat of AI is very different from that of the nuclear weapons we've learned to live with. Nukes can't think. They can't lie, deceive, or manipulate. They can't plan and execute. Somebody has to push the big red button.
The shocking emergence of general-purpose AI, even at the slow, buggy level of GPT-4, has forced the genuine risk of extermination back into the conversation.
Let's be clear from the outset: even if we agree that artificial superintelligence has a chance of wiping out all life on Earth, there doesn't seem to be much we can do about it. It's not just that we don't know how to stop something smarter than us. We can't even, as a species, stop ourselves from racing to create it. Who's going to make the laws? The US Congress? The United Nations? This is a global issue. Desperate open letters from industry leaders asking for a six-month pause to figure out where we're at may be about the best we can do.
Six months, just give me six months bro, I'll align this. I'll align the hell out of this. Just six months bro. I promise you. It'll be crazy. Just six months. Bro, I'm telling you, I have a plan. I have it all mapped out. I just need six months bro, and it'll be done. Can you i-