By Alex Heath, a deputy editor and author of the Command Line newsletter. He has over a decade of experience covering the tech industry.
That was the prevailing theme from this week’s Cerebral Valley AI Summit in San Francisco — a gathering of about 350 CEOs, engineers, and investors in the AI industry that I attended on Wednesday.
Until now, the AI hype cycle has been predicated on the theory that throwing more data and compute at training new AI models will yield exponentially better results. But as I first reported in this newsletter, Google and others are starting to see diminishing returns from training their next models. This proverbial “wall” challenges the assumption that the next crop of major AI models will be dramatically smarter than what exists today.