Processor vendors are starting to emphasize microarchitectural improvements and data movement over process node scaling, setting the stage for much bigger performance gains in devices that narrowly target what end users are trying to accomplish.
The changes reflect a recognition that domain specificity, and the ability to adapt designs to unique workloads, are now the best ways to improve both performance and energy efficiency. While process shrinks will continue to provide some benefits, typically no more than 15% to 20% improvements in performance and power, it’s clear that banking on those improvements alone is no longer a recipe for success. Customization and intelligent optimization are now essential, and a one-size-fits-all processor strategy is obsolete for most markets.
“Two things have happened in the last few years,” said Aart de Geus, chairman and co-CEO of Synopsys. “One is the amount of data massively increased. As a matter of fact, since 2018, machine-created data dwarfs what humans are creating. At the same time, machine learning has just arrived at the point where computation is good enough. So now you can do really cool stuff with it. This has not gone unnoticed. Every vertical market is now saying, ‘I have a lot of data. What if I could do something smart with it?’ And the notion of taking a lot of data, and changing a vertical market even slightly to make it more efficient, has very big economic ramifications. Credit Suisse estimates this is a more than $40 trillion opportunity for smart everything. So people are experimenting with this, and the minute they see a little bit of success, the next question is, ‘How come your chips are so darn slow?’”