
For HPC Cloud, The Underlying Hardware Will Always Matter


Those who require high performance computing (HPC) resources tend to set their own rules when it comes to systems. Highly tuned for blazing fast computation and communication, with software stacks optimized to match, these users have little in common, system-selection-wise, with the average enterprise running database or transactional applications.

It stands to reason that these HPC users carry these habits with them wherever they run, including on public cloud resources. The number of HPC applications running in cloud environments has grown steadily, and now the scale at which they operate is growing too, especially with new demands from increased data volumes and the addition of AI/ML to the workload mix.

For a large contingent of ordinary enterprise cloud users, the belief is that a major benefit of the cloud is not having to think about the underlying infrastructure. In fact, understanding the underlying infrastructure is critical to unlocking the value and optimal performance of a cloud deployment. Even more so, HPC application owners need in-depth insight and, therefore, a trusted hardware platform with co-design and portability built in from the ground up and solidified through long-running cloud provider partnerships.

These HPC users understand co-design, optimization, and what specific enhancements to both ISV and open source codes can yield when they are tuned for particular hardware. In other words, the standard lift-and-shift approach to cloud migration is not an option. The need for blazing fast performance with complex parallel codes means fine-tuning hardware and software together. That is critical for performance and for cost optimization, says Amy Leeland, director of hyperscale cloud software and solutions at Intel.
