With Artificial Intelligence, Short-Term Risk Aversion Is Long-Term Risk Seeking

Editor’s note: This article is part of the series “Compete and Win: Envisioning a Competitive Strategy for the Twenty-First Century.” The series endeavors to present expert commentary on diverse issues surrounding US competitive strategy and irregular warfare with peer and near-peer competitors in the physical, cyber, and information spaces. The series is part of the Competition in Cyberspace Project (C2P), a joint initiative by the Army Cyber Institute and the Modern War Institute. Read all articles in the series here.

On November 27, 2020, Iran’s top nuclear scientist was assassinated. The initial accounts differed wildly, and it took roughly ten months for the New York Times to break the real story. In prose that could have come from a sci-fi novel, the world learned that Israeli intelligence operatives had carried out the assassination with “a high-tech, computerized sharpshooter [rifle] kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.”

A more salient, tactical manifestation of autonomous capabilities is drone warfare. Particularly lethal is the American-made, multipurpose loitering munition Altius 600, which has a range of 276 miles and a ceiling of twenty-five thousand feet, providing intelligence, surveillance, and reconnaissance; counter–unmanned aircraft systems effects; and precision-strike capabilities against ground targets. Many systems like the Altius “will use artificial intelligence to operate with increasing autonomy in the coming years.” But AI-enabled weapons systems are already being used for lethal targeting. For example, the Israeli-made Orbiter 1K unmanned aircraft system, a loitering munition recently employed by the Azerbaijani military in the Second Nagorno-Karabakh War, independently scans an area and automatically detects and destroys stationary or moving targets kamikaze-style. If the Orbiter 1K does not observe a target right away, it loiters above the battlespace until it does. As two instances of AI-augmented, autonomous weapons being used to kill remotely, the assassination and the drone warfare of the Second Nagorno-Karabakh War draw attention to longstanding concerns about AI-enabled machines and warfare.

For the United States to retain its technological edge, it must prioritize AI investment and military modernization, focusing on the development of artificial intelligence and derivative technologies both to secure the large, enterprise-scale, distributed networks on which all warfighting functions rely and to maintain tactical and strategic advantage in the current competitive environment. That modernization, however, must be thoughtful and purposeful, allowing careful consideration of the ethical and moral questions raised by using autonomous systems and AI-enabled technologies in lethal military targeting.

Military ethics naturally evolve alongside military technology: as weapons and their effects become better understood with use and time, ethical considerations are revised and updated. But with AI-enabled battlefield technology, we should engage in discussions about morals and ethics before employment, and continue those discussions in parallel with the development, testing, adoption, and use of AI-enabled weapons systems. A breakthrough in AI by an adversary is an existential threat to national security, and a breakthrough by the United States will likely save American lives on the battlefield; yet premature adoption of such technologies in warfare presents an equally dangerous threat to our national values. Still, short-term risk avoidance is really long-term risk-seeking behavior, one that will eventually allow our adversaries to outpace the United States and achieve technological overmatch.
