One year after supercomputers worked together to fight COVID, it’s time to broaden the partnership to prepare for other crises
Last spring, as the world was coming to grips with the frightening scale and contagion of the COVID-19 pandemic, scientists began to make rapid progress in understanding the disease. Many of those discoveries were aided by world-class supercomputers and data systems, and research results advanced with unprecedented efficiency: from understanding the structure of the SARS-CoV-2 virus to modeling its spread, from therapeutics to vaccines, from medical response to managing the virus's impacts.
Computer-based epidemiology models have informed public policy in the United States and in countries around the globe, and newly developed transmission models for the virus are being used to forecast resource availability and mortality, stratified by age group, at the county level. Artificial intelligence and machine learning approaches have tackled drug screening, identifying candidate medicines from trillions upon trillions of possible chemical compounds, and analyses of differential gene expression among patient populations have yielded important implications for treatment planning. Structural modeling of the virus has also led to new insights, speeding the development of vaccines and antigens.
Long-term investments in basic research and infrastructure, together with the capacity to quickly marshal resources in response to a crisis, underlie the COVID-19 High-Performance Computing (HPC) Consortium that delivered those results. This pandemic will not be the last crisis we face, however, and one critical lesson is the importance of preparing for future emergencies.