Machine intelligence, part 1 - Sam Altman


This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it.  If you’re already afraid of machine intelligence, you can skip this one and read the second post tomorrow—I was planning to only write part 2, but when I asked a few people to read drafts it became clear I needed part 1.

Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity.  There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.  Also, most of these other big threats are already widely feared.

It is extremely hard to put a timeframe on when this will happen (more on this later), and to most people working in the field it certainly feels many, many years away. But it’s also extremely hard to believe that it isn’t very likely to happen at some point.
