He asked other game theorists to send in their best strategies in the form of “bots”: short pieces of code that took an opponent’s past actions as input and returned one of the two classic Prisoner’s Dilemma moves, COOPERATE or DEFECT. For example, you might have a bot that COOPERATES a random 80% of the time, but DEFECTS against any opponent that plays DEFECT more than 20% of the time or that answers a COOPERATE with a DEFECT, and that always DEFECTS on the last round.
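To make the interface concrete, here is a minimal sketch of that example bot in Python. The name `grudger_bot` and the exact function signature (opponent history, current round, total rounds) are my own assumptions for illustration, not anything from the original tournament:

```python
import random

COOPERATE, DEFECT = "COOPERATE", "DEFECT"

def grudger_bot(opponent_history, round_number, total_rounds):
    """Hypothetical bot: opponent's past moves in, one move out."""
    # Always DEFECT on the last round.
    if round_number == total_rounds - 1:
        return DEFECT
    # Punish any opponent that has defected more than 20% of the time.
    if opponent_history and opponent_history.count(DEFECT) / len(opponent_history) > 0.2:
        return DEFECT
    # Otherwise COOPERATE a random 80% of the time.
    return COOPERATE if random.random() < 0.8 else DEFECT
```

(The "DEFECT in response to COOPERATE" clause would need the bot's own history too; this sketch keeps only the parts expressible from the opponent's side alone.)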
In the “tournament”, each bot “encountered” other bots at random for a hundred rounds of Prisoner’s Dilemma; after all the bots had finished their matches, the strategy with the highest total utility won.
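The scoring works like this: every round each pair of bots gets a payoff from the standard Prisoner's Dilemma matrix (Axelrod used 3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a successful betrayal), and the payoffs accumulate across all matches. A rough sketch, with a round-robin pairing standing in for the random encounters and bots simplified to functions of the opponent's history:

```python
import itertools

# Axelrod's payoffs: (my points, their points) for each pair of moves.
PAYOFF = {
    ("COOPERATE", "COOPERATE"): (3, 3),
    ("COOPERATE", "DEFECT"):    (0, 5),
    ("DEFECT",    "COOPERATE"): (5, 0),
    ("DEFECT",    "DEFECT"):    (1, 1),
}

def play_match(bot_a, bot_b, rounds=100):
    """One match; each bot sees only the opponent's past moves."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = bot_a(history_b)
        move_b = bot_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tournament(bots, rounds=100):
    """Every bot plays every other; highest total utility wins."""
    totals = {name: 0 for name in bots}
    for (name_a, a), (name_b, b) in itertools.combinations(bots.items(), 2):
        score_a, score_b = play_match(a, b, rounds)
        totals[name_a] += score_a
        totals[name_b] += score_b
    return totals
```

Note that the winner is the bot with the highest *total*, not the one with the most head-to-head victories - a distinction that turns out to matter.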
The winner was TIT-FOR-TAT, about the simplest strategy imaginable: cooperate on the first round, then copy whatever your opponent did on the previous round. This was so boring that Axelrod sponsored a second tournament specifically for strategies that could displace TIT-FOR-TAT. When the dust cleared, TIT-FOR-TAT still won - although some strategies could beat it in head-to-head matches, they did poorly against each other, and when all the points were added up TIT-FOR-TAT remained on top.
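TIT-FOR-TAT really is that short. A minimal sketch, with a `simulate` helper of my own for illustration:

```python
def tit_for_tat(opponent_history):
    # Open with cooperation, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "COOPERATE"

def simulate(bot_a, bot_b, rounds=5):
    """Play two bots against each other; return bot_a's moves."""
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = bot_a(history_b)
        move_b = bot_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a
```

Two copies of TIT-FOR-TAT cooperate on every round; against an always-defector it gets burned exactly once and then defects forever after.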
In certain situations, this strategy is dominated by a slight variant, TIT-FOR-TAT-WITH-FORGIVENESS. That is, in situations where a bot can “make mistakes” (eg “my finger slipped”), two copies of TIT-FOR-TAT can get stuck in an eternal DEFECT-DEFECT equilibrium against each other; the forgiveness-enabled version will try cooperating again after a while to see if its opponent follows. Otherwise, it’s still state-of-the-art.
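The failure mode and the fix are easy to see in simulation. Below is a sketch of self-play with random execution errors; the forgiveness probability (10%) and error rate (2%) are numbers I picked for illustration, not canonical values:

```python
import random

def tit_for_tat(history, rng):
    # rng unused here; kept so both bots share one signature.
    return history[-1] if history else "COOPERATE"

def forgiving_tft(history, rng, forgive_prob=0.1):
    # Like TIT-FOR-TAT, but after an opponent DEFECT it occasionally
    # tries COOPERATE anyway, which breaks mutual-retaliation loops.
    if history and history[-1] == "DEFECT" and rng.random() >= forgive_prob:
        return "DEFECT"
    return "COOPERATE"

def noisy_match(bot, rounds=500, slip=0.02, seed=0):
    """Self-play where each intended move is flipped with probability
    `slip` ("my finger slipped"); returns the fraction of rounds with
    mutual cooperation."""
    rng = random.Random(seed)
    history_a, history_b = [], []
    mutual = 0
    for _ in range(rounds):
        move_a = bot(history_b, rng)
        move_b = bot(history_a, rng)
        if rng.random() < slip:
            move_a = "COOPERATE" if move_a == "DEFECT" else "DEFECT"
        if rng.random() < slip:
            move_b = "COOPERATE" if move_b == "DEFECT" else "DEFECT"
        history_a.append(move_a)
        history_b.append(move_b)
        if move_a == move_b == "COOPERATE":
            mutual += 1
    return mutual / rounds
```

After the first slip, plain TIT-FOR-TAT's mutual-cooperation rate collapses and only another lucky slip can restore it; the forgiving version recovers within a handful of rounds each time.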