In the first four parts of this Communications Historical Reflections column series, I have followed the artificial intelligence (AI) brand from its debut in the 1950s through to the reorientation of the field around probabilistic approaches and big data during the AI winter that ran through the 1990s and early 2000s.
Aside from the brief flourishing of an expert system industry in the 1980s, the main theme of that long history was disappointment. AI-branded technologies that impressed when applied to toy lab problems failed to scale up for practical application. Test problems such as chess were eventually mastered, but only with techniques that had little relevance to other tasks or plausible connection to human cognition. Cyc, the most ambitious project of the 1980s, served mostly to highlight the limitations of symbolic AI. Even IBM lost billions when it tried to turn Watson’s 2011 triumph on Jeopardy! into the foundation of a healthcare services business.
During the 2010s, in sharp contrast, the machine learning community accumulated a collection of flexible tools that exceeded expectations in one application after another. Suddenly the AI surprises were coming on the upside: Who knew that neural networks could write poetry or turn prompts into photographs? DeepMind, a British company acquired by Google in 2014, generated a series of headlines. It created the first computer system able to play the board game Go at the highest levels, a much greater computational challenge than chess. DeepMind then applied itself to protein folding, suggesting that deep learning might be poised to transform scientific research. In 2024 that work earned two DeepMind researchers a share of the Nobel Prize in Chemistry. Another DeepMind system figured out winning strategies for a range of Atari VCS games from the early days of home videogaming. Because games provided an automatically measured score, they were well suited to the development of reinforcement learning algorithms that did not require humans to manually label thousands of training examples.