Claiming the No. 1 position in AI supercomputing, the U.S. National Energy Research Scientific Computing Center (NERSC) today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed-precision performance.
Perlmutter is based on the HPE Cray Shasta platform with the Slingshot interconnect and is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021.
“That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses,” said Dion Harris, Nvidia senior product marketing manager, in a blog post released today. “And that performance doesn’t even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.”
According to the Top500 list of the world’s most powerful supercomputers, the current top-ranked system is Fugaku, jointly developed by Japan’s RIKEN scientific institute and Fujitsu.