
INTELLECT-1: Launching the First Decentralized Training of a 10B Parameter Model


We're excited to launch INTELLECT-1, the first decentralized training run of a 10-billion-parameter model, inviting anyone to contribute compute and participate. This brings us one step closer to open-source AGI.

Recently, we published OpenDiLoCo, an open-source implementation and scaling of DeepMind’s Distributed Low-Communication (DiLoCo) method, enabling globally distributed AI model training. We not only replicated and open-sourced this work but also scaled it successfully to the 1B-parameter size.
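The key idea behind DiLoCo is that each node runs many local optimizer steps and only synchronizes occasionally: the change each worker has made to the shared weights is treated as a "pseudo-gradient," which an outer optimizer applies to the global model. The snippet below is a minimal single-process sketch of that inner/outer loop, assuming a toy model, synthetic data, and hypothetical hyperparameters; it is an illustration of the structure, not the OpenDiLoCo codebase.

```python
# Single-process sketch of a DiLoCo-style loop (illustrative only).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_WORKERS = 4    # simulated nodes (hypothetical)
INNER_STEPS = 50   # local steps between synchronizations
OUTER_ROUNDS = 10  # communication rounds

global_model = nn.Linear(32, 1)  # stand-in for the real model
outer_opt = torch.optim.SGD(global_model.parameters(), lr=0.7,
                            momentum=0.9, nesterov=True)

for round_idx in range(OUTER_ROUNDS):
    # Averaged pseudo-gradient accumulated across workers this round.
    deltas = [torch.zeros_like(p) for p in global_model.parameters()]

    for worker in range(NUM_WORKERS):
        # Each worker starts the round from the shared global weights.
        local_model = copy.deepcopy(global_model)
        inner_opt = torch.optim.AdamW(local_model.parameters(), lr=1e-3)
        for _ in range(INNER_STEPS):
            x = torch.randn(64, 32)           # worker-local batch (synthetic)
            y = x.sum(dim=1, keepdim=True)
            loss = nn.functional.mse_loss(local_model(x), y)
            inner_opt.zero_grad()
            loss.backward()
            inner_opt.step()
        # Pseudo-gradient: how far this worker moved from the global weights.
        for d, gp, lp in zip(deltas, global_model.parameters(),
                             local_model.parameters()):
            d += (gp.detach() - lp.detach()) / NUM_WORKERS

    # Outer step: apply the averaged pseudo-gradient with the outer optimizer.
    outer_opt.zero_grad()
    for p, d in zip(global_model.parameters(), deltas):
        p.grad = d
    outer_opt.step()
    print(f"round {round_idx}: last local loss {loss.item():.4f}")
```

Because weights are exchanged only once per outer round rather than every step, the communication volume drops by orders of magnitude, which is what makes training over ordinary internet links between distant nodes feasible.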

Now we are scaling it up by a further 10×, to a 10B-parameter model, roughly 25× larger than the original DiLoCo research. This brings us to the third step of our master plan: collaboratively training frontier open foundation models, from language and agent models to scientific models.

Our goal is to solve decentralized training step by step, ensuring that AGI will be open-source, transparent, and accessible, preventing control by a few centralized entities and accelerating human progress.
