Note: this short story is a response to this comment. Specifically, it attempts to steelman the claim that super-intelligent AI is "aligned by definition", if all we care about is that the AI is "interesting", not that it respects human values. I do not personally advocate that anyone build a paperclip maximizer.
The Alignment Problem had at last been solved. Thanks to advances in Eliciting Latent Knowledge, explaining human values to an AI was as simple as typing:
As a result, a thousand flowers of human happiness and creativity had bloomed throughout the solar system. Poverty, disease and death had all been eradicated, thanks to the benevolent efforts of Democretus, the super-intelligent AI that governed the human race.
Democretus--or D, as everyone called the AI--was no dictator, however. Freedom was among the values humans prized most highly, and D was programmed to respect that. Not only were humans free to disobey D's commands--even when doing so would cause them harm--but there was also a kill-switch built into D's programming. If D ever discovered that 51% of humans no longer wished for it to rule, it would shut down in a way designed to cause as little disruption as possible.