To defend against malicious AI, the United States needs to build a robust digital immune system

Image: VideoFlow via Adobe

Artificial intelligence is delivering breakthroughs—from life-saving drugs to more efficient industries—but as a dual-use technology, it can also be misused for destructive ends. Policymakers have responded by restricting chip exports to adversaries and urging developers to build safe AI, hoping to slow misuse or enforce better norms. But too often, these efforts treat AI only as a threat to contain, rather than a tool to help solve the very risks it creates.

To confront 21st-century threats, society needs to deploy contemporary tools—namely AI itself. Export controls and ethical pledges may slow competitors or promote better behavior, but they can’t keep up with a technology that’s cheap to copy, easy to repurpose, and spreading at internet speed. To stay safe, the United States must add a third pillar to its AI strategy: systems that actively defend against malicious use, specifically AI that can fight back.

Dubbed defensive AI, models built for this purpose can monitor, detect, and respond to anomalies in real time. Trained on vast troves of normal activity as well as attack patterns—like phishing emails, credit‑card fraud, and malicious DNA designs—these models learn what “normal” system behavior looks like, so they can quickly flag deviations and take steps to contain or neutralize threats. Such AI functions like a digital immune system, spotting abnormalities and responding before humans even know something is wrong.
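To make that pattern concrete, here is a minimal sketch of anomaly detection using a generic off-the-shelf model (scikit-learn's IsolationForest) on synthetic data. The feature columns, thresholds, and "attack" event are illustrative assumptions, not a description of any particular defensive-AI system.

```python
# A minimal sketch of the anomaly-detection pattern described above:
# learn "normal" activity from historical data, then flag deviations
# in new traffic. Features and values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Stand-in for logs of normal activity (e.g., request rate, payload size).
normal_activity = rng.normal(loc=[100.0, 2.0], scale=[10.0, 0.5], size=(5000, 2))

# Fit a detector on what "normal" system behavior looks like.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# New events: mostly normal, plus one obvious outlier (a hypothetical attack).
new_events = np.vstack([
    rng.normal(loc=[100.0, 2.0], scale=[10.0, 0.5], size=(10, 2)),
    [[900.0, 40.0]],  # e.g., a sudden burst of oversized requests
])

# predict() returns -1 for anomalies and 1 for inliers.
for event, label in zip(new_events, detector.predict(new_events)):
    if label == -1:
        print(f"ALERT: anomalous event {event} - contain or escalate")
```

In a deployed system, the flagged event would feed an automated response (rate limiting, quarantine, human review) rather than a print statement; the point of the sketch is only the fit-on-normal, flag-the-deviation loop.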
