The current AI Risk Management Framework from the Department of Commerce and the Defense Department’s Data, Analytics, and Artificial Intelligence Adoption Strategy do not adequately address the unique challenges of deploying AI in low-frequency, high-impact scenarios, particularly nuclear weapon decision-making. These scenarios are characterized by data scarcity, given the rarity of events involving nuclear weapons use, and by static model deployment, since opportunities to update AI models used in nuclear decision-making are few and largely simulation-based.
In the nuclear context, where strategic risks differ markedly from other applications of AI, the overarching governance question should be: “Does this AI application increase the risk of nuclear war?” To answer this, the Nuclear Posture Review should incorporate specific data governance to manage data scarcity, an AI risk-management framework to mitigate unique nuclear-related risks, and comprehensive recovery plans to ensure human control in the event of AI system failures or anomalies.
Globally, nuclear stability is under threat from conflicts in Europe and the Middle East, China’s expanding arsenal and delivery capabilities, and the potential for other authoritarian regimes to acquire nuclear weapons. At home, modernizing the United States’ aging nuclear command, control, and communications (NC3) systems is necessary, and integrating AI into that effort is all but inevitable. But if done incorrectly, this modernization could undermine, rather than enhance, nuclear stability.