How Memory Design Optimizes System Performance

Exponential increases in data, and the demand for improved performance to process that data, have spawned a variety of new approaches to processor design and packaging. They also are driving big changes on the memory side.

While the underlying technology still looks very familiar, the real shift is in the way those memories are connected to processing elements and various components within a system. That can have a big impact on system performance, power consumption, and even the overall resource utilization.

Many different types of memory have emerged over the years, most with a well-defined purpose despite some crossover and unique use cases. Among them are DRAM, SRAM, flash, and other specialty memories. DRAM and SRAM are volatile memories, meaning they require power to maintain data. Non-volatile memories retain data without power, but the number of write cycles they can sustain is limited, and they wear out over time.

All of these fit into the so-called memory hierarchy, starting with SRAM, a very fast memory that typically is used for various levels of cache. Its applications are limited, however, by its high cost per bit. Also at the lowest level, and often embedded into an SoC or attached to a PCB, NOR flash is typically used for booting up devices. It is optimized for random access, so storage locations do not have to be read in any particular sequence.
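To make the cost of falling off the fast end of that hierarchy concrete, the small C sketch below (not from the article; the array size and stride are arbitrary assumptions) sums the same amount of data twice: once sequentially, where consecutive accesses reuse cache lines already held in SRAM, and once at a cache-line stride across a large array, where most accesses have to go all the way out to DRAM.

```c
/* A minimal sketch of why the memory hierarchy matters: the same number of
 * additions runs much faster when accesses stay within the SRAM caches than
 * when nearly every access misses out to DRAM. Sizes are illustrative only. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)  /* 64M ints (~256 MB), far larger than any cache */

static double elapsed_ms(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (size_t i = 0; i < N; i++) data[i] = 1;

    struct timespec t0, t1;
    volatile long sum = 0;  /* volatile keeps the compiler from deleting the loops */

    /* Sequential pass: consecutive accesses share cache lines, so each
       64-byte line is fetched from DRAM once and then served from cache. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++) sum += data[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("sequential: %.1f ms\n", elapsed_ms(t0, t1));

    /* Strided pass: the same number of additions, but each access lands on a
       different cache line and the working set is too big to stay cached, so
       every line is re-fetched from DRAM on each of the 16 passes. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < 16; s++)
        for (size_t i = s; i < N; i += 16) sum += data[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("strided:    %.1f ms\n", elapsed_ms(t0, t1));

    free(data);
    return 0;
}
```

Compiled with optimizations (for example, cc -O2), the gap between the two passes gives a rough feel for the SRAM-versus-DRAM latency and bandwidth difference that the hierarchy, and the new ways of connecting memory to processing elements, are meant to hide.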
