In context: The first iteration of high-bandwidth memory (HBM) was somewhat limited, only allowing speeds of up to 128 GB/s per stack. It also came with one major caveat: each HBM1 stack topped out at 1 GB, so graphics cards using four stacks were capped at 4 GB of memory. Over time, however, HBM manufacturers such as SK Hynix and Samsung improved upon these shortcomings.
HBM2 doubled potential speeds to 256 GB/s per stack and maximum capacity to 8 GB. In 2018, HBM2 received a minor update called HBM2E, which further increased the capacity limit to 24 GB and brought another speed increase, eventually hitting 460 GB/s per stack at its peak.
When HBM3 rolled out, speeds doubled again, allowing for a maximum of 819 GB/s per stack. Even more impressive, capacities nearly tripled, from 24 GB to 64 GB. Like HBM2 before it, HBM3 saw a mid-life upgrade, HBM3E, which pushed theoretical speeds up to 1.2 TB/s per stack.
Along the way, HBM was gradually displaced in consumer-grade graphics cards by more affordable GDDR memory. Instead, high-bandwidth memory became the standard in data centers, with manufacturers of professional and compute-focused cards opting for the much faster interface.