“El Capitan” Supercomputer Blazes The Trail for Converged CPU-GPU Compute

Every couple of years, Lawrence Livermore National Laboratory gets to install the world’s fastest supercomputer. And thankfully the HPC center usually chooses a machine that not only fulfills its mission of managing the nuclear weapons stockpile of the United States military, but also picks a mix of technologies that advances the state of the art in supercomputing.

This is what history has taught us to expect from Lawrence Livermore, and with the “El Capitan” system unveiled today at the SC24 supercomputer conference, history is indeed repeating itself. But this time is a little different because El Capitan is booting up amidst the largest buildout of supercomputing capacity in the history of Earth.

As far as we and the experts at Lawrence Livermore can tell, on many metrics El Capitan can stand toe to toe with the massive machinery that the hyperscalers and cloud builders are firing up for their AI training runs. El Capitan is a machine that is tailor-made to run some of the most complex and dense simulation and modeling workloads ever created, and it just so happens to be pretty good at the new large language models that are at the heart of the GenAI revolution.

And thanks to the “Rosetta” Slingshot 11 interconnect, designed by Cray and a core component of the EX line of systems sold by Hewlett Packard Enterprise, El Capitan already employs the kind of HPC-enhanced, scalable Ethernet that the Ultra Ethernet Consortium is trying to advance as the hyperscalers and cloud builders tire of paying a premium for InfiniBand networks for their AI clusters.
