TED Talk Warns Of The Dangers Of Autonomous Robots Making Lethal Decisions

By Joelle Renstrom | Updated

Taranis Drone

Advancements in robotics occur at such a breathtaking pace that it's difficult to keep up. Nowhere is this more evident than in warfare. Right now, military drones require communication and commands from humans; in short, the robots can't make their own decisions. Yet.

Daniel Suarez, a science fiction writer and software developer, recently delivered a TED Talk in which he argued that fully autonomous military robots are coming, and that we need to start preparing now. What Suarez fears most is that autonomous military robots will take decision-making out of human hands and thus take the human out of war, which would change warfare entirely.

Per a 2012 directive from the U.S. Department of Defense, a human must be involved in all lethal decisions, which effectively bans the U.S. military from using fully autonomous weapons. But Suarez worries that the choice to keep life-and-death decisions out of robots' hands won't last. He identifies three main forces threatening to shift that decision-making from humans to machines.

  • We’re increasingly overloaded with data from drone and surveillance equipment, to the point that we’ll have to rely on machines to scan the information and flag anything important, meaning that robots will tell humans what to look at, rather than the other way around.

U.S. Sentinel Drone

  • Electromagnetic jamming compromises the communication between a drone and its human operator. Suarez references the GPS spoofing that led to a U.S. Sentinel drone being captured by Iran in 2011. This kind of attack can affect any remotely piloted drone, which means that in order to remain functional, drones will have to send few signals of their own and be capable of ignoring external ones. They'll also have to make their own decisions in order to achieve their objectives.
  • The scariest factor Suarez cites is plausible deniability. He argues that in our current environment of “cyber espionage,” autonomous drones will likely be produced in secret under contract, and that a black market for them will emerge. Because of the covert nature of their production, distribution, and operation, it will be next to impossible to identify who made and used them. This could ultimately lead to anonymity in war, which “could tilt the geopolitical balance on its head.” He worries that the nature of warfare could shift from defense to offense (kind of like the NBA), and that the ability to wage war won't be restricted to countries; any individual or private organization with the means to obtain these weapons could use them without worrying too much about getting caught.

This, he argues, would change not just warfare, but the social landscape and democracy itself. Ultimately, autonomous weapons compromise responsibility and transparency, and put “too much power in too few hands.”

Suarez also worries that citizens of high-tech societies will actually be at greater risk than citizens of low-tech societies, because autonomous machines feed on data, and data is what powers high-tech societies. Instead of targeting people with ads, autonomous machines could literally target rebels, free-thinkers, political groups, anyone they want. Autonomous weapons “could make lethal action an easy choice,” Suarez says.

Guardium-8

So…good times, eh? The scenarios he lays out are grim at best, apocalyptic at worst. So what does he propose we do?

He doesn’t argue that we should ban autonomous drones altogether, since they have important environmental and search-and-rescue uses; the goal is to keep those benefits while protecting ourselves from the kinds of weapons he describes. “The secret will be transparency,” he says, which means that robots shouldn’t expect privacy. He suggests that each autonomous drone carry a cryptographically signed identification number, kind of like what happens when scientists tag animals (a rough sketch of the idea follows below). That way, we could track the movements of these drones through public spaces, and citizens would know if and where drones are operating around them. He suggests that if rogue drones do emerge, they could be identified by autonomous police drones but then dealt with by humans.
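To make the tagging idea a little more concrete, here is a minimal sketch of how a drone's ID number could be cryptographically signed by a registry and verified by anyone observing it. This is only an illustration, not anything Suarez specifies: it assumes an Ed25519 keypair, the third-party Python `cryptography` package, and a made-up drone identifier.

```python
# Minimal sketch (hypothetical): signing and verifying a drone ID.
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The manufacturer or a public registry holds the private key and signs each drone's ID.
registry_key = Ed25519PrivateKey.generate()
drone_id = b"DRONE-0042"                 # hypothetical identifier broadcast by the drone
signature = registry_key.sign(drone_id)  # broadcast alongside the ID

# Anyone holding the registry's public key can check that a broadcast ID is genuine.
public_key = registry_key.public_key()
try:
    public_key.verify(signature, drone_id)
    print("ID checks out: drone is registered")
except InvalidSignature:
    print("Invalid signature: possible rogue drone")
```

The point of the signature is that an observer can confirm who vouched for a drone without trusting the drone itself, which is roughly the kind of public accountability Suarez is after.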

Suarez also calls for an international treaty on robotic weapons, much like existing international treaties on nuclear and biological weapons. As a first step, he suggests making the 2012 DoD directive permanent, so that it could perhaps serve as a model for an international agreement.

Despite claims that autonomous killing drones are “years and years away,” Suarez insists that time is of the essence, as 70 countries currently have or are developing remotely piloted combat drones.

I think I prefer the cuddly robots that make you feel better. Sounds like we might need them.

You can watch Suarez’s full TED Talk below.