The second bitter lesson - by Adam Elwood

When Richard Sutton introduced the bitter lesson in 2019, he dispelled the myth that great human ingenuity is needed to create intelligent machines. All it seems to take is a lot of computing power and algorithms that scale well, rather than clever techniques built on deep domain understanding. The two major classes of algorithms that fit this description are search and learning; when they are scaled up, advanced AI systems naturally emerge. Sutton’s key insight can be summarised by his final paragraph:

[a] general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, […] instead we should build in only the meta-methods [search and learning] that can find and capture this arbitrary complexity. […] We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

A concrete example of this can be seen in the development of chess algorithms. Early chess programs, such as those from the 1970s and 1980s, relied heavily on human-crafted heuristics: rules and strategies devised by experts to mimic human understanding of the game. These systems could play decently but were limited by the ingenuity and foresight of their human designers. In contrast, modern chess engines like AlphaZero, developed by DeepMind, rely only on search and learning.
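To make that contrast concrete, here is a minimal, hypothetical Python sketch (not AlphaZero, and not from the original post): the same generic depth-limited search is run over a toy game, single-pile Nim, once with a hand-crafted expert evaluation and once with a value estimate obtained purely from random self-play rollouts. The game, function names, and parameters are all illustrative assumptions.

# A minimal sketch (not AlphaZero itself): the same generic depth-limited
# search over a toy game (single-pile Nim, take 1-3 stones per turn) is run
# with two leaf evaluators. One encodes expert knowledge (the modulo-4 rule);
# the other "learns" a value estimate purely from random self-play rollouts.
import random

TAKES = (1, 2, 3)

def handcrafted_eval(stones: int) -> float:
    # Expert rule: the player to move loses exactly when stones % 4 == 0.
    return -1.0 if stones % 4 == 0 else 1.0

def rollout_eval(stones: int, n_rollouts: int = 500) -> float:
    # Knowledge-free alternative: estimate the value for the player to move
    # by averaging outcomes of random self-play games from this position.
    total = 0.0
    for _ in range(n_rollouts):
        s, to_move, winner = stones, 1, 0
        while s > 0:
            s -= random.choice([t for t in TAKES if t <= s])
            if s == 0:
                winner = to_move        # taking the last stone wins
            to_move = -to_move
        total += winner
    return total / n_rollouts

def search(stones: int, depth: int, evaluate) -> float:
    # Negamax search: value of the position for the player to move.
    if stones == 0:
        return -1.0                     # previous player took the last stone
    if depth == 0:
        return evaluate(stones)
    return max(-search(stones - t, depth - 1, evaluate)
               for t in TAKES if t <= stones)

if __name__ == "__main__":
    for stones in range(1, 13):
        expert = search(stones, depth=2, evaluate=handcrafted_eval)
        learned = search(stones, depth=2, evaluate=rollout_eval)
        print(f"{stones:2d} stones | handcrafted: {expert:+.2f} | rollout-learned: {learned:+.2f}")

The point of the sketch is that the knowledge-free evaluator, given enough rollouts and search depth, recovers essentially the same assessments that the hand-crafted rule encodes directly, which is the pattern the bitter lesson describes at a much larger scale.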
