According to inside reports, Orion (the codename for OpenAI’s attempted GPT-5 release) is not significantly smarter than the existing GPT-4, which likely means AI progress on baseline intelligence is plateauing. Here’s TechCrunch summarizing:
Employees who tested the new model, code-named Orion, reportedly found that even though its performance exceeds OpenAI’s existing models, there was less improvement than they’d seen in the jump from GPT-3 to GPT-4.
If this were just a few hedged anonymous reports about “less improvement,” I honestly wouldn’t give it much credence. But traditional funders and boosters like Marc Andreessen are also saying the models are reaching a “ceiling,” and now one of the great proponents of the scaling hypothesis (the idea that AI capabilities scale with model size and the amount of data the models are fed) is agreeing. Ilya Sutskever was always the quiet scientific brains behind OpenAI, not Sam Altman, so what he recently told Reuters should be given significant weight:
Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training—the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures—have plateaued.