Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes.
The AI risks literature generally takes for granted that superintelligence will produce superpowers. It rarely examines how or why specific powers might develop. In fact, it’s common to deny that an explanation is either possible or necessary.
The argument is that we are more intelligent than chimpanzees, which is why we are more powerful, in ways chimpanzees cannot begin to imagine. Something more intelligent than us, the reasoning goes, would be unimaginably more powerful still. On this view, we can’t know how a superintelligent AI would gain inconceivable power, but we can be confident that it would.
However, for hundreds of thousands of years humans were not much more powerful than chimpanzees. Significantly empowering technologies began to accumulate only a few thousand years ago, apparently due to cultural evolution rather than to increases in innate intelligence. The more dramatic increases in human power beginning with the industrial revolution were almost certainly not due to increases in innate intelligence either. What role intelligence plays in the development of science and technology is largely unknown; I’ll return to this point later.