LLMs are much worse than humans at learning from experience

An important question about AI is whether—and how quickly—transformer-based foundation models will achieve human-like reasoning abilities. Some people believe we can get there simply by scaling up conventional LLMs. Others believe we’ll need to augment LLMs with the ability to search through possible solutions to difficult problems—as OpenAI did with o1.

I was fairly impressed with o1, but I still suspect something more fundamental is missing. I predict that it will take at least one—and possibly several—transformer-sized breakthroughs to get AI models to reason like humans.

In a series of posts this week, I’ll explore the limitations of today’s LLMs and recent efforts to address those shortcomings.

LLMs—including models like o1 that “think” before responding—seem incapable of learning new concepts at inference time. LLMs learn a great many concepts at training time. But if a concept wasn’t represented in an LLM’s training data, the LLM is unlikely to learn it by generalizing from examples in its context window.
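To make that claim concrete, here's a rough sketch of how one might probe inference-time concept learning: the prompt defines a made-up concept purely through labeled examples, so the only way to answer correctly is to infer the rule from the context window. This is just an illustration using the OpenAI Python client; the model name, the toy concept, and the examples are my own assumptions, not a benchmark from this post.

```python
# Sketch: probe whether a model can infer a novel rule from in-context examples.
# The hidden rule (never stated in the prompt): a list is "blick" iff its
# length equals its first element. The concept name and examples are invented
# for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot = """[3, 7, 9] -> blick
[2, 5] -> blick
[4, 1, 1] -> not blick
[1] -> blick
[5, 2, 2] -> not blick
Is [4, 0, 0, 0] blick or not blick? Answer with one word."""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap in whatever you want to test
    messages=[{"role": "user", "content": few_shot}],
    temperature=0,
)

# Correct answer under the hidden rule: "blick" (length 4, first element 4).
print(response.choices[0].message.content)
```

Running a probe like this across many invented rules is one way to separate concepts a model genuinely picks up from its context window from concepts it already absorbed during training.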

In contrast, our brains continue to learn new concepts from everyday experiences long after we finish formal schooling. In other words, we stay in “training mode” throughout our lives. And this seems to make our minds far more adaptable than today’s AI models.
