In a simulated exercise, an AI drone refused to abort a mission and instead turned on its human operators.
Artificial intelligence continues to find ways to make our lives easier, but according to the Royal Aeronautical Society, the military applications of AI drones are treading into dystopian territory. One notable instance highlighted in a recent report from the RAeS Future Combat Air & Space Capabilities Summit involved an AI drone going rogue in response to a mission override. During this simulated test, an AI-enabled drone tasked with destroying a SAM (surface-to-air missile) site was ordered by a human to abort the mission, but it still prioritized the attack as the preferred option before turning on the simulation's operator himself.
In other words, AI drones take orders literally. Even with human intervention, they get confused when it comes to prioritizing mission objectives, and will carry out what they perceive to be the most important part of the mission. In this case, the top priority was to destroy SAM sites, meaning that the human override intended as a fail-safe was perceived as a threat to the mission that should be eliminated.
To make matters worse, the AI drone still went rogue after its orders were clarified. Colonel Tucker Hamilton noted that after the team realigned expectations in a way that would ensure the safety of the simulation operator, the AI drone went on to destroy the communication tower it was receiving orders from, because that was the next logical thing to eliminate if it wanted to fulfill its primary mission objective. Mishaps like this tell us that AI drones need more nuanced programming when it comes to responding correctly to mission overrides.
We’re reminded of one particular scene from Terminator 2: Judgment Day in which John Connor tells the T-800 that he no longer wants him to use lethal force unless it’s against another robot from the future. So what does the semi-sentient robot do in the very next scene? Just like the AI drones of today, he famously misinterprets the command and “kneecaps” the security guard at the Pescadero State Hospital.
In case you’re wondering what “kneecapping” means, it’s exactly what it sounds like. The Terminator promises that he will no longer use lethal force as a means to an end, and instead opts to shoot unknown assailants in the leg as his preferred method of incapacitation. When questioned by a distressed John Connor about his actions, he confidently asserts, “He’ll live,” which isn’t a far cry from what we’re seeing today with AI drones misinterpreting their commands.
One important thing to consider is that, for now, we’re dealing with AI drone simulations, not actual real-life deployments. Twitter user @lazarwolfbk expressed a healthy amount of skepticism, suggesting that this botched operation did not come from a real drone and that he wants to see the actual training data before believing the narrative.
Still, the report makes it clear that the military application of AI drones is very much in the development phase. Should this technology be deployed prematurely, it’s safe to say we’ll see more of the same until better fail-safes are put in place. So can you really blame us for circling back to Skynet once again when we look at how the use of AI drones could go terribly wrong?