The Cerebras CS-2 is the complete solution for AI compute: powered by the world’s largest chip, co-designed with Cerebras Software so it’s simple to program, and packaged in an innovative system that fits directly into your infrastructure.
Powered by the second-generation Wafer-Scale Engine (WSE-2), the CS-2 delivers greater compute density, more fast memory, and a higher-bandwidth interconnect than any other datacenter AI solution.
Easily programmable with leading ML frameworks, the CS-2 helps industry and research organizations unlock cluster-scale AI performance with the simplicity of a single device — delivering faster time to solution with greater power and space efficiency.
The right solution for AI goes beyond the table stakes of designing a flexible core optimized for sparse linear algebra computations (though we did that too).
Today’s state-of-the-art models take days or weeks to train. Organizations often need to distribute training across tens, hundreds, or even thousands of GPUs to make training times tractable. These huge clusters of legacy, general-purpose processors are hard to program and are bottlenecked by communication and synchronization overheads.
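To get a feel for the communication overhead described above, here is an illustrative back-of-envelope sketch. It assumes a standard data-parallel setup with a ring all-reduce of gradients every step — a common pattern on GPU clusters, not something specific to any vendor — where each worker sends and receives roughly 2·(N−1)/N times the model size per step:

```python
def allreduce_bytes_per_worker(param_count: int, bytes_per_param: int, workers: int) -> float:
    """Approximate bytes each worker transfers per training step
    under a ring all-reduce of the full gradient (illustrative model).
    """
    model_bytes = param_count * bytes_per_param
    # Ring all-reduce moves 2 * (N - 1) / N of the buffer per worker.
    return 2 * (workers - 1) / workers * model_bytes


# Hypothetical example: a 1-billion-parameter model with fp32 gradients
# (4 bytes each), data-parallel across 64 GPUs.
per_step = allreduce_bytes_per_worker(1_000_000_000, 4, 64)
print(f"{per_step / 1e9:.2f} GB transferred per worker per step")
```

At thousands of steps per epoch, this per-step traffic is why interconnect bandwidth and synchronization, rather than raw compute, often dominate training time on large clusters.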