ChatGPT and other large language models (LLMs) can spew forth essays and short stories by the bushel-load. How come none of them are of any real interest? OpenAI’s new “o1” model outscores PhDs on a test of expertise in chemistry, physics and biology. Why isn’t it generating novel scientific insights?
A popular explanation is that “AI can’t create anything new”: because these systems are trained on human-produced data, they can only generate remixes of existing work. But that analysis is wrong. The real impediments to AI creativity lie elsewhere. Understanding them will shed light on whether they can be removed, and help us evaluate the significance of advances like OpenAI’s latest model, o1.
The idea that AI can’t create anything “new” does not stand up to scrutiny. Consider this snippet from The Policeman’s Beard is Half Constructed, a book written by a computer and published way back in 1984: