
Microsoft Is First To Get HBM-Juiced AMD CPUs

Submitted by Style Pass
2024-11-22 20:00:09

Intel was the first of the major CPU makers to add HBM stacked DRAM to a CPU package, with the “Sapphire Rapids” Max Series Xeon 5 processors. But with the “Granite Rapids” Xeon 6, Intel abandoned HBM in favor of what it hoped would be more mainstream MCR DDR5 main memory, which multiplexes ranks to boost bandwidth by nearly 2X over regular DDR5 memory.
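The arithmetic behind that "nearly 2X" claim is straightforward: an MCR DIMM time-multiplexes two ranks onto one channel, so the channel can run at roughly twice the per-rank transfer rate. Here is a minimal sketch of the peak-bandwidth math; the specific speed grades (DDR5-4800 as the baseline, two ranks at 4,400 MT/s multiplexed to 8,800 MT/s) are illustrative assumptions, not figures from the article.

```python
# Peak bandwidth of one 64-bit DDR channel: transfer rate (MT/s) x 8 bytes per transfer.
def channel_bw_gbs(mts, bus_bytes=8):
    """Peak bandwidth of a single DDR channel, in GB/s."""
    return mts * bus_bytes / 1000

# Assumed speed grades for illustration only:
ddr5 = channel_bw_gbs(4800)   # a stock DDR5-4800 channel
mcr  = channel_bw_gbs(8800)   # MCR DIMM: two ranks at 4,400 MT/s, multiplexed onto the bus

print(f"DDR5-4800: {ddr5:.1f} GB/s per channel")
print(f"MCR-8800:  {mcr:.1f} GB/s per channel ({mcr / ddr5:.2f}X)")
```

With those assumed grades, the MCR channel comes out to about 1.83X the plain DDR5 channel, which is where a "nearly 2X" figure would come from.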

Intel had its reasons for adding HBM memory to Sapphire Rapids. The main reason was to boost the CPU performance of the exascale-class “Aurora” hybrid CPU-GPU supercomputer that Intel created with the help of Hewlett Packard Enterprise for Argonne National Laboratory. The Aurora machine has 21,248 of the Xeon 5 Max Series CPUs packaged in 10,624 nodes that also have a total of 63,744 of Intel’s “Ponte Vecchio” Max Series GPUs. (That is two CPUs paired with six GPUs in a single node, which is about all that anyone can pack into the space of a Cray EX sled.)

The other reason to add HBM memory to the CPU was the hope that other HPC centers stuck on CPUs – because they have not yet ported their applications to GPUs, or cannot expect good performance on their workloads even if they did – would see that a CPU with far more memory bandwidth, on the order of 4X to 5X that of normal DDR5 memory, could significantly boost the performance of bandwidth-bound applications without porting those codes to GPUs.
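A rough peak-bandwidth comparison shows where a multiplier in that range comes from. The configuration below is a hedged sketch, not the article's numbers: it assumes eight DDR5-4800 channels on the socket versus four HBM2e stacks, each with a 1,024-bit interface running at an assumed 3.2 Gbps per pin.

```python
# Back-of-envelope socket-level bandwidth: DDR5 channels vs HBM2e stacks.
# All speed grades and counts below are illustrative assumptions.

def ddr5_socket_gbs(channels, mts):
    """Peak socket bandwidth from 64-bit DDR5 channels, in GB/s."""
    return channels * mts * 8 / 1000          # 8 bytes per transfer per channel

def hbm2e_socket_gbs(stacks, gbps_per_pin):
    """Peak socket bandwidth from 1,024-bit HBM2e stacks, in GB/s."""
    return stacks * 1024 * gbps_per_pin / 8   # 1,024 pins per stack, bits -> bytes

ddr = ddr5_socket_gbs(8, 4800)    # assumed: 8 channels of DDR5-4800
hbm = hbm2e_socket_gbs(4, 3.2)    # assumed: 4 HBM2e stacks at 3.2 Gbps/pin

print(f"DDR5:  {ddr:.1f} GB/s")
print(f"HBM2e: {hbm:.1f} GB/s ({hbm / ddr:.1f}X)")
```

Under those assumptions the peak ratio lands a bit above 5X; sustained bandwidth derates both sides, so a 4X to 5X advantage in practice is consistent with this kind of configuration.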
