AI bounties revisited


The recent news of ChatGPT plugins has me waxing nostalgic about the days when I lived in the United States, studying electrical engineering, equal parts furious and depressed that AGI might well destroy the world before I ever got myself properly settled in it. Ah, the glory days…

Well, I’m doing a lot better now, both psychologically and emotionally. Having a beautiful wife and a full-time job does wonders for the soul! But I thought I would repost this tiny gem of a schizopost from way back in my postrat Twitter days. I no longer believe a word of it, but hopefully it gives you some food for thought. Or at least some mild amusement that the young can be so foolish!

Alignment is hard, maybe impossible. Implementing alignment is at least as hard, and might be much harder. Perhaps there is a third option: Just don’t build an AGI, safe or unsafe, in the first place.

For one person, this is easy: Pick a different career. For a small group of people, it is harder: Some people might want to build an AI despite the risks. Their reasons often come down to stances on deep philosophical issues, like where qualia come from. You won’t convince these people to see it your way, although you may well convince them that your caution is justified. There’s no getting around it: You need to employ some kind of structural violence to stop them.
