A.I. might "liberate" most humans to do jobs that are less meaningful and entail limited agency. It'll be more fun than it sounds.
We keep waiting for A.I. to liberate us from mundane, repetitive tasks or just plain old busywork. Then, we tell ourselves, we'll finally be able to focus on what we do best — showing empathy, being creative, and solving problems.
What if it is software that's about to be liberated? We've subjected it to mundane, repetitive tasks while we focused on the things we did best. "Did" because there's no reason why software won't become better than us at precisely those things: Coming up with new ideas, showing empathy, and solving complex problems.
Of course, to operate in the world, software would require human feedback. But it won't be the type of feedback humans currently have in mind. A.I. will not turn to its supervisor and ask, "Is this OK?"; it will not present its plans to a committee of humans and wait for everyone to provide feedback and agree on some watered-down version of the original ideas. Instead, it will process human input the way most software products already do: By tracking how humans behave and interact within different contexts, environments, and designs. A.I. doesn't need to ask us what we think or what works; it can deduce it through experiments.
Computing resources are scarce, as are energy and space. Once software is better suited for innovation and creativity, comparative advantage would demand that software focus on those and leave other work to everyone else. Everyone else means us: We'll get the low-value work that software is too busy (or too expensive) to worry about. Sure, we might still have some (cheap) software to help us. But the fantasy that humans will create and innovate while machines toil seems questionable. It might happen, but it is quite probable that it won't.