Partitioning In The Chiplet Era


Understanding how chiplets interact under different workloads is critical to ensuring signal integrity and optimal performance in heterogeneous designs.

The widespread adoption of chiplets in domain-specific applications is creating a partitioning challenge that is far more complex than anything chip design teams have dealt with before.

Nearly all the major systems companies, packaging houses, IDMs, and foundries have focused on chiplets as the best path forward to improve performance and reduce power. Signal paths can be shortened with proper floor-planning, and connections between chiplets and memories can be improved to reduce resistance and capacitance, which in turn can reduce the overall power. But so many new features are being added into devices — specialized accelerators and memories, CPUs, GPUs, DSPs, NPUs — that mapping out optimal data paths, load balancing options, and workarounds for aging signal paths is an enormously complicated task, and one that may change from one design to the next.
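To make that tradeoff concrete, below is a minimal, hypothetical sketch in Python of the kind of cost function a partitioning or floor-planning flow might evaluate for one candidate chiplet placement. Everything in it is an assumption for illustration, not something described in the article: the chiplet names, placements, traffic figures, load fractions, and the weighting between traffic-weighted link length (a rough proxy for RC delay and interconnect power) and load imbalance are all made up.

```python
# Hypothetical sketch: all names, coordinates, traffic numbers, and weights
# below are illustrative assumptions, not taken from any real design flow.
# It scores one candidate chiplet floorplan as:
#   traffic-weighted link length (proxy for RC delay / interconnect power)
#   + a penalty for uneven load across compute chiplets.

# (x, y) placements of chiplet centers on the package, in mm (assumed values)
placement = {
    "cpu": (0.0, 0.0),
    "npu": (8.0, 0.0),
    "gpu": (0.0, 6.0),
    "hbm": (8.0, 6.0),
}

# Average traffic between chiplet pairs for one workload, in GB/s (assumed values)
traffic = {
    ("cpu", "npu"): 40.0,
    ("cpu", "hbm"): 120.0,
    ("npu", "hbm"): 200.0,
    ("gpu", "hbm"): 160.0,
}

# Fraction of the workload's compute assigned to each compute chiplet (assumed values)
load = {"cpu": 0.25, "npu": 0.45, "gpu": 0.30}


def manhattan(a, b):
    """Approximate routed distance between two chiplet centers, in mm."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def floorplan_cost(placement, traffic, load, alpha=1.0, beta=50.0):
    """Lower is better: weighted sum of link length and load imbalance."""
    wire = sum(
        gbps * manhattan(placement[u], placement[v])
        for (u, v), gbps in traffic.items()
    )
    mean = sum(load.values()) / len(load)
    imbalance = sum((x - mean) ** 2 for x in load.values())
    return alpha * wire + beta * imbalance


if __name__ == "__main__":
    print(f"cost = {floorplan_cost(placement, traffic, load):.1f}")
```

In practice the traffic matrix and load fractions change per workload, which is why the same floorplan can look good for one application and poor for another; a real tool would sweep placements and assignments against many workload profiles rather than score a single static case as this sketch does.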

Underlying this shift is a continuing slowdown in scaling benefits. Shrinking features to add more compute density into a planar SoC is no longer cost-effective for many applications, and it limits the overall number of features.
