Approaching the Event Horizon - by Carl Cortright

Some posts and data worth taking a look at before you read this. These came out since the o3 release in late December and are the basis for what I’m about to talk about:

The first part of this post starts out kind of technical, but I'm going to explain it in technical terms first and then in simpler terms, so that most folks can follow. I think the graph above shows a massive breakthrough that no one is talking about, but should be.

The breakthrough comes down to one thing: an algorithm that can scale inference-time compute at a reasonable time complexity, and thereby scale reasoning. That sounds like a really complicated concept, but the essence of it is that we taught computers to “think”, like, for real. OpenAI has been hinting at this for a while, ever since Strawberry, their new reasoner, leaked.
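OpenAI hasn't published the algorithm, so treat this as a rough illustration only: one well-known way to trade extra inference-time compute for better answers is to sample many candidate answers and majority-vote among them (often called self-consistency or best-of-N sampling). The sketch below simulates that idea with a hypothetical `noisy_solver` standing in for a single model call; the names and numbers are invented for illustration.

```python
import random
from collections import Counter

def noisy_solver(question: str) -> int:
    """Hypothetical stand-in for one model sample: right 60% of the time."""
    correct_answer = 42
    if random.random() < 0.6:
        return correct_answer
    return random.randint(0, 99)  # a plausible-looking wrong answer

def answer(question: str, samples: int) -> int:
    """Spend more inference-time compute by drawing more samples and
    majority-voting; the model is fixed, only the thinking budget grows."""
    votes = Counter(noisy_solver(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

# Accuracy climbs as we scale the number of samples per question.
for n in (1, 5, 25, 125):
    trials = 1_000
    hits = sum(answer("some hard question", n) == 42 for _ in range(trials))
    print(f"{n:>3} samples -> {hits / trials:.0%} correct")
```

Real reasoning systems are far more sophisticated than this (they search over chains of reasoning rather than independent guesses), but the shape of the trade is the same: more compute at inference time buys better answers.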

When computer scientists talk about algorithms, they often talk about them in terms of what's called time complexity. In the most basic terms, time complexity is how the number of steps a computer takes grows with the size of the input. Things like sorting a list of your favorite foods are relatively cheap (O(n log n), what's called linearithmic), while things like generating every possible ordering of that list are really expensive (O(n!), what's called factorial). A concrete comparison is sketched below.
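A minimal sketch of that difference using only Python's standard library; the food list and sizes are made up for illustration:

```python
import itertools
import time

foods = ["pizza", "tacos", "sushi", "ramen", "curry", "pho", "falafel", "bagels"]

# Sorting is O(n log n): even 80,000 items sort almost instantly.
start = time.perf_counter()
sorted_big = sorted(foods * 10_000)
print(f"sorted {len(sorted_big):,} items in {time.perf_counter() - start:.4f}s")

# Listing every ordering is O(n!): just 8 items already yield 40,320
# orderings, and each additional item multiplies that count again
# (9 items -> 362,880; 10 items -> 3,628,800; ...).
start = time.perf_counter()
orderings = list(itertools.permutations(foods))
print(f"listed {len(orderings):,} orderings in {time.perf_counter() - start:.4f}s")
```

The sort stays fast no matter how long the list gets, while the factorial enumeration becomes hopeless after only a dozen or so items.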
