A closer look at Intel and AMD's different approaches to gluing together CPUs

Analysis Shortly after the launch of AMD's first-gen Epyc processors codenamed Naples in 2017, Intel quipped that its competitor had been reduced to gluing a bunch of desktop dies together in order to stay relevant.

Unfortunately for Intel, that comment hasn't exactly aged well: just a few short years later, the x86 giant was reaching for the glue itself.

Intel's Xeon 6 processors, which began rolling out in phases this year, represent its third generation of multi-die Xeons and its first datacenter chips to embrace a heterogeneous chiplet architecture not unlike AMD's own.

As a quick refresher on why so many CPU designs are moving away from monolithic architectures, it largely comes down to two factors: reticle limits and yields.

Generally speaking, short of major improvements in process technology, more cores inevitably mean more silicon. However, there are practical limits to how large a die can actually get - we refer to this as the reticle limit, which is roughly 800 mm² - and once you bump up against it, the only way to continue scaling compute is to use more dies.
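To put some rough numbers on both factors, here's a back-of-the-envelope sketch in Python. The per-core area, uncore area, and defect density below are made-up figures purely for illustration, and the Poisson yield model is the simplest textbook approximation rather than anything a foundry actually uses.

```python
# Back-of-the-envelope illustration of the two pressures pushing CPUs toward
# chiplets: the reticle limit and per-die yield. Every number here is an
# assumption chosen for demonstration, not real product data.
import math

RETICLE_LIMIT_MM2 = 800.0       # rough single-exposure reticle limit
AREA_PER_CORE_MM2 = 6.0         # assumed core + cache slice area
UNCORE_AREA_MM2 = 150.0         # assumed fixed IO/memory/fabric area

# 1. Reticle limit: how many cores fit on one monolithic die?
core_budget_mm2 = RETICLE_LIMIT_MM2 - UNCORE_AREA_MM2
max_cores = int(core_budget_mm2 // AREA_PER_CORE_MM2)
print(f"Max cores on a monolithic die: {max_cores}")

# 2. Yield: a simple Poisson defect model, yield = exp(-defect_density * area)
DEFECT_DENSITY_PER_MM2 = 0.001  # assumed random killer-defect density

def die_yield(area_mm2: float) -> float:
    """Probability that a die of the given area has zero killer defects."""
    return math.exp(-DEFECT_DENSITY_PER_MM2 * area_mm2)

big_die = 780.0                 # near-reticle-limit monolithic die
chiplet = big_die / 4           # same silicon split into four smaller dies

for label, area in (("monolithic", big_die), ("chiplet", chiplet)):
    y = die_yield(area)
    print(f"{label:>10}: {area:6.0f} mm^2  yield {y:5.1%}  "
          f"scrapped per bad die {area:4.0f} mm^2")
```

The takeaway holds under fancier yield models too: a near-reticle-limit die can only hold so many cores, and every defect that lands on it scraps far more silicon than a defect on a small chiplet would.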

We've now seen this approach in a number of products - not just CPUs - that cram two large dies onto a single package. Intel's Gaudi 3 accelerators, Nvidia's Blackwell GPUs, and Intel's Emerald Rapids Xeons are just a few examples.
