This is the first post in a five-part series where we focus on the basics of large language models. By the end of this series, you will have a toolkit to compare different large language models using four dimensions that we’ll cover over the next four posts:
By the end of this series, you’ll also understand the “large” in “large language models,” why OpenAI needed to raise an eye-popping $10 billion from Microsoft (the 🖥️ Compute post), why platforms like Stack Overflow and Reddit are charging for access to their data (the 📚 Data post), and why an internal Google document frantically discusses the looming possibility of open source lapping them with smaller, cheaper models (the 🧠 Model Size post).
While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. — leaked internal Google doc
Machine learning models¹ do two things: they learn, and they infer. We’ll cover both learning, also known as “training,” and inference.
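To make the two phases concrete, here is a minimal sketch in plain Python (with a hypothetical toy dataset, not anything from a real LLM): training adjusts a parameter to fit examples, and inference applies the trained parameter to new input.

```python
# Toy dataset: each output is exactly 2x its input.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0  # the model's single learnable parameter

# Training: gradient descent on mean squared error.
# Each step nudges w toward the value that best fits the examples.
for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad

# Inference: apply the learned parameter to an unseen input.
prediction = w * 10.0
print(round(w, 2), round(prediction, 1))  # w converges to ~2.0, so this prints: 2.0 20.0
```

Real language models work the same way in spirit, just with billions of parameters instead of one, and text instead of numbers.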