Can AI ‘feel’ guilt?

submitted by
Style Pass
2025-07-30 21:00:14

Some sci-fi scenarios depict robots as cold-hearted clankers eager to manipulate human stooges. But that’s not the only possible path for artificial intelligence.

Humans have evolved emotions like anger, sadness and gratitude to help us think, interact and build mutual trust. Advanced AI could do the same. In populations of simple software agents (like characters in “The Sims” but much, much simpler), having “guilt” can be a stable strategy that benefits them and increases cooperation, researchers report July 30 in the Journal of the Royal Society Interface.

Emotions are not just subjective feelings but bundles of cognitive biases, physiological responses and behavioral tendencies. When we harm someone, we often feel compelled to do penance, perhaps as a signal to others that we won’t offend again. This drive for self-punishment can be called guilt, and it’s how the researchers programmed it into their agents. The question was whether agents that had it would be outcompeted by those that didn’t, say Theodor Cimpeanu, a computer scientist at the University of Stirling in Scotland, and colleagues.

The agents played a two-player game with their neighbors called iterated prisoner’s dilemma. The game has roots in game theory, a mathematical framework for analyzing multiple decision makers’ choices based on their preferences and individual strategies. On each turn, each player “cooperates” (plays nice) or “defects” (acts selfishly). In the short term, you win the most points by defecting, but that tends to make your partner start defecting, so everyone is better off cooperating in the long run. The AI agents couldn’t feel guilt as richly as humans do but experienced it as a self-imposed penalty that nudges them to cooperate after selfish behavior.
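The dynamic described above can be sketched in a few lines of code. In this illustrative Python example, an agent plays the iterated prisoner’s dilemma against a tit-for-tat partner; the “guilty” version pays a self-imposed penalty after defecting and switches back to cooperation. The payoff values, the size of the guilt penalty and the tit-for-tat partner are assumptions for illustration, not the paper’s exact model.

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my points
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, partner exploits me
    ("D", "C"): 5,  # I defect against a cooperator (best short-term payoff)
    ("D", "D"): 1,  # mutual defection
}

GUILT_PENALTY = 4  # hypothetical self-imposed cost paid after defecting


def play(rounds, guilt=True):
    """Score for agent A (optionally guilty) vs. a tit-for-tat partner B."""
    a_score = 0
    a_move, b_move = "D", "C"  # A starts selfishly; B starts nice
    for _ in range(rounds):
        a_score += PAYOFF[(a_move, b_move)]
        next_b = a_move  # tit-for-tat: B copies A's last move
        if guilt and a_move == "D":
            a_score -= GUILT_PENALTY  # penance for acting selfishly
            next_a = "C"              # guilt nudges A back to cooperation
        else:
            next_a = a_move           # otherwise A keeps doing the same thing
        a_move, b_move = next_a, next_b
    return a_score


print(play(20, guilt=True), play(20, guilt=False))
```

Over repeated rounds the guilty agent eats one penalty, settles into mutual cooperation and ends up ahead, while the unrepentant defector gets locked into low-scoring mutual defection, mirroring the paper’s point that a costly guilt mechanism can still be the winning strategy in the long run.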
