AI Alignment: Why Solving It Is Impossible - by Dakara

2024-05-11 13:30:06

The discussion space of AI alignment is filled with perspectives and analyses arguing the difficulty of this quest. However, I am not entertaining an argument for the difficulty of alignment, but rather the impossibility of alignment: alignment is not a solvable problem.

The problem space described by alignment theory is not well-defined. As with much of the discourse around powerful AI, we are left working from assumptions and abstractions as the basis for our reasoning.

Yet from this abstract conceptualization, alignment theory requires us to arrive at a provable outcome: the premise demands certainty over effectively unbounded power.

The first major issue (#1) is a complete obstacle to producing a verifiable solution in the present. The second set of issues (#2) contains no goals that would ever allow us to satisfy (#1).

“Alignment, which we cannot define, will be solved by rules on which none of us agree, based on values that exist in conflict, for a future technology that we do not know how to build, which we could never fully understand, must be provably perfect to prevent unpredictable and untestable scenarios for failure, of a machine whose entire purpose is to outsmart all of us and think of all possibilities that we did not.”
