Choice engines and paternalistic AI


Humanities and Social Sciences Communications, volume 11, Article number: 888 (2024)

Many consumers suffer from inadequate information and behavioral biases, which can produce internalities, understood as costs that people impose on their future selves. In these circumstances, “Choice Engines,” powered by Artificial Intelligence (AI), might produce significant savings in terms of money, health, safety, or time. Different consumers care about different things, of course, which is a reason to insist on a high degree of freedom of choice and a high degree of personalization. Nonetheless, it is important to emphasize that Choice Engines and AI might be enlisted by self-interested actors, who might exploit inadequate information or behavioral biases and thus reduce consumer welfare. It is also important to emphasize that Choice Engines and AI might themselves exhibit behavioral biases: perhaps the same ones that human beings are known to show, perhaps human biases that have not yet been named, or perhaps novel ones, not shown by human beings, that cannot be anticipated.

Can artificial intelligence increase social welfare by improving people’s choices? Can it address the problem of heterogeneity? Might self-interested designers or users exploit behavioral biases, increase manipulation, and thus reduce social welfare? The answer to all of these questions is “yes”—which raises serious issues for regulators, and serious empirical challenges for researchers.
