At the recent conference on the ethics of AI-enabled weapons systems at the U.S. Naval Academy, well over half the talks discussed meaningful human control of AI to some extent. If you work in the AI ethics community, and especially among those working on AI ethics and governance for the military, it is hard to open an article or enter a room without encountering someone literally or metaphorically slamming their fist on the table while extolling the importance of human control over AI, and especially over AI-enabled weapons. No meeting or paper on the ethics of AI-enabled weapons is complete without stressing the importance of having a human in the loop, whether in the now outdated sense of meaningful human control or in the more recently popular sense of appropriate human judgment. It often seems as though everyone agrees that human control over AI weaponry is a good thing. But I am not so sure that “meaningful human control over AI” is the panacea everyone makes it out to be.
Arguments in favor of meaningful human control of AI-enabled weapon systems usually focus on safety, precision, responsibility, and dignity. Centrally, proponents of human control over AI-enabled weapons systems don’t think that lethal targeting decisions should be left to AI. This is why the examples used to stress the importance of meaningful human control often focus on weapons systems that use AI for targeting decisions — systems like Collaborative Operations in Denied Environments (CODE) or HARPY. According to publicly available information, and Paul Scharre’s description in his book Army of None, CODE’s purpose is to develop “collaborative autonomy — the capability of a group of unmanned aircraft systems to work together under a single person’s supervisory control.” This control can take several forms depending on whether the system is operating in a contested electromagnetic environment (the more contested the environment, the greater the reliance on autonomous features). Usually, the human operator gives high-level commands like “orbit here,” “follow this route,” or “search and destroy within this area.” In search-and-destroy missions, once the airborne vehicles find enemy targets, “they cue up their recommended classification to the human for confirmation,” Scharre reports. In addition, after target confirmation, the system asks for authorization to fire. This means that there are at least three places where a human exerts control over the system: first when drawing the box around the area the drones should search for targets, next when confirming the target, and finally when accepting the plan of attack. Proponents of meaningful human control see this as a prime example of leveraging all that is good about AI while preserving human control — thus minimizing accidents (ensuring safety) and making it possible to identify whom to hold responsible when things go wrong (enabling the assignment of responsibility).
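The three control checkpoints described above can be sketched as a simple gating loop. This is a minimal, hypothetical illustration of the decision flow, not CODE's actual software: all names (`SearchBox`, `Target`, `supervisory_loop`) are invented for the example.

```python
# Hypothetical sketch of the three human-control checkpoints described above.
# None of these names reflect CODE's real interfaces; they only model the flow.

from dataclasses import dataclass


@dataclass
class SearchBox:
    """Checkpoint 1: the operator draws the area the drones may search."""
    name: str


@dataclass
class Target:
    classification: str  # the system's recommended classification


def supervisory_loop(box: SearchBox, detected: list[Target],
                     confirm, authorize) -> list[Target]:
    """Engage only targets that pass both remaining human checkpoints.

    `confirm(t)` models the human confirming the recommended classification
    (checkpoint 2); `authorize(t)` models accepting the plan of attack
    (checkpoint 3).
    """
    engaged = []
    for t in detected:
        if not confirm(t):    # checkpoint 2: target confirmation
            continue
        if not authorize(t):  # checkpoint 3: authorization to fire
            continue
        engaged.append(t)
    return engaged


# Usage: a human who confirms tank classifications but withholds
# fire authorization -- nothing is engaged without explicit approval.
box = SearchBox("grid-7")
found = [Target("tank"), Target("truck")]
result = supervisory_loop(box, found,
                          confirm=lambda t: t.classification == "tank",
                          authorize=lambda t: False)
print(len(result))  # 0 -- no engagement without the final human go-ahead
```

The point of the sketch is that autonomy operates only inside the human-drawn box, and lethal action requires two further affirmative human decisions.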