Bridging the Moat - Arthur Rosa - WQ6E

OpenAI has long held the belief that they occupy a unique position in the market and are therefore protected by a very wide and deep moat from any competition coming along and eating their lunch. That moat, seemingly, consists only of the massive and ever-increasing cost of training one of their AI models. Their belief, then, was that they were so far ahead of everyone else, and had so much more capital available to them, that they could withstand any onslaught and simply outspend the competition. They could therefore justify any capital expenditure as a means to keep the moat filled and sustain their market “leadership”.

What OpenAI and the rest of the market seemingly didn’t foresee is that you don’t really need to spend $2+ billion to train a model - you can train one for about $6 million over the span of just a couple of months. The latest model (R1) from Chinese AI startup DeepSeek was trained in a very short time using open datasets and released for free to anyone who wants it. Benchmarks of these models are difficult to trust fully, but initial user testing seems to indicate that it outperforms OpenAI’s o1 models on most metrics and blows away similar offerings from Anthropic.

DeepSeek-R1 spells the end of OpenAI’s market dominance. In the span of a few months, a small startup in China created a new model that matches or outperforms o1, using hardware that is more than two years old (the Nvidia H800, a China-specific variant of the three-year-old H100 chip), for only $6 million in energy and hardware costs.
