How I learned to stop worrying about and start loving killer robots

Illustration by Rachel Fischman

The term “killer robots” evokes terrifying images, from the sci-fi Terminators attempting to wipe out humanity to the real-life Predator drones conducting airstrikes in Pakistan. While militaries have used remote-controlled drones for some time, ethicists and human rights advocates are increasingly concerned about fully autonomous weapon systems. The Catholic peace movement Pax Christi, part of the unoriginally named Campaign to Stop Killer Robots, recently warned that killer robots will push humanity further from an “ethical and just society.”

Killer robots are distinct from the drones currently in military use in that they can make the decision to kill on their own. Such technology is still a long way off from becoming commonplace. The Predator and Reaper drones used by the US military require numerous humans in a complex chain of command to make the decision to kill, from the President who approves the kill list to the pilot who pulls the trigger. Because they are simply remote-controlled by humans, these drones are not killer robots.

There are a few weapons that can make the decision to kill, such as South Korea’s SGR-A1, a sentry gun designed to guard the DMZ. Yet such weapons are still rudimentary and are thus relegated to relatively simple tasks, usually far from population centers. Even with advances in robotics and artificial intelligence, it is unlikely that military robots will ever act completely without human influence. While machines can process information faster, humans have a unique ability to innovate and adapt that computers cannot and may never replicate. In spite of advanced chess programs like Deep Blue, the best chess players remain so-called “centaurs,” which pair human players with computer algorithms. Such centaurs are the most likely next step for automation on the battlefield, with human officers commanding robot soldiers.

Nonetheless, robots will become exponentially more intelligent in the near future thanks to Moore’s Law, and this will allow unmanned weapons to exercise far greater autonomy. Even with human officers giving orders, robots will still make some decisions on their own, including the decision to pull the trigger. This frightens human rights activists, who have begun to call for a global ban on such weapons. Their arguments rest on two main points: first, that killer robots somehow violate human dignity, and second, that robots lack the moral compass of human beings.

The first argument is downright ridiculous. You are just as dead if you are killed by a robot as if you are killed by a human being. I fail to see how death at the hands of a machine is worse than any of the other ways of being killed.

The second argument is overly harsh toward robots and, frankly, overly generous to human beings. Humanity is not a kind species; on the contrary, history shows that humans are no strangers to savagery and violence. Yes, we are capable of love, compassion, and mercy, but more often we succumb to rage, hatred, and sadism. Humans kill all the time for all sorts of reasons: we kill because we don’t like the color of another person’s skin; we kill because we think an invisible man in the sky told us to; we even kill for fun.

The Holocaust, the Rape of Nanking, the Crusades, slavery, and the Rwandan Genocide are just a few of the sins of humanity. As someone who has smelled the stench of several tons of human hair at Auschwitz, I am well aware of what human beings are capable of. Robots didn’t fill the ranks of the SS, the Khmer Rouge, or the Islamic State. Robots didn’t invent numerous ways of slowly and painfully killing people, like the breaking wheel. Robots didn’t slaughter millions of Native Americans and then make a holiday about it. Humans did.

Opponents argue that since robots blindly follow orders, it will be easier for evil humans to carry out atrocities. Again, this makes a fallacious and overly optimistic assumption about humans. While there are always a few conscientious objectors, most people follow authority figures like sheep, even when told to do horrible things. The Nazis had no problem finding thousands of loyal followers to carry out the Holocaust. Members of the Wehrmacht and SS who refused to carry out their orders were few and far between.

The vast majority did what they were told, sometimes because they legitimately believed in the cause, sometimes because they feared reprisal, and sometimes, as was the case with Adolf Eichmann, because they were just “doing their job.” Indeed, the famous Milgram experiment demonstrated the propensity of humans to follow orders even when told to do terrible things. In the experiment, the subject is told to ask another “volunteer” (really an actor) a series of questions and deliver an electric shock whenever they answer incorrectly. The intensity of the shocks increases after each wrong answer, and eventually the “volunteer” starts to scream in pain and complain of a heart condition before ceasing to respond. The experimenter then orders the subject to continue delivering the maximum electric shock, and most people comply. Milgram’s work has been replicated and expanded upon by other psychologists, who have found similar results.

When humans aren’t killing because they are ordered to, they are killing without orders. The perpetrators of the My Lai Massacre acted of their own volition. In 1945, Japanese troops slaughtered 100,000 civilians in Manila after their commanding officer had ordered them to withdraw from the city. Numerous UN peacekeepers have been implicated in raping and killing the very people they are supposed to protect. War can have a savage effect on the human brain, with the stress and dehumanization of combat sometimes pushing soldiers past the breaking point.

As Game of Thrones put it, there’s a beast in every man, and it stirs when you put a sword in his hand. Robots, however, are not men. They don’t give in to anger or fear. They don’t rape or pillage. They aren’t motivated by revenge, sadism, or bigotry. A war fought by killer robots might not be a clean one, but it would be a lot better than a war waged by killer humans.

In fact, robots are actually better at following the rules of war. Preventing war crimes would be as simple as converting the Geneva Conventions into code and uploading it to a robot’s hard drive. Machines would follow these rules to the letter, unlike human soldiers, who might not understand or care about the rules of war, as was the case at My Lai.
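To make the idea concrete, here is a minimal sketch of what rules of engagement as code might look like. Everything in it (the Target type, the four hard-coded checks, and the crude proportionality score) is a hypothetical simplification for illustration, not a real targeting system or an actual encoding of the Geneva Conventions.

```python
from dataclasses import dataclass

# Hypothetical sketch: a robot that refuses to fire unless every
# hard-coded rule of engagement passes. All names and fields below
# are invented for illustration.

@dataclass
class Target:
    is_combatant: bool
    has_surrendered: bool
    is_medical_unit: bool
    expected_civilian_casualties: int
    military_value: int  # abstract score used for the proportionality check

def is_engagement_lawful(target: Target) -> bool:
    """Return True only if every rule of engagement is satisfied."""
    # Distinction: only combatants may be attacked.
    if not target.is_combatant:
        return False
    # Hors de combat: those who have surrendered are protected.
    if target.has_surrendered:
        return False
    # Medical units and personnel are protected.
    if target.is_medical_unit:
        return False
    # Proportionality: expected civilian harm must not be excessive
    # relative to the anticipated military advantage.
    if target.expected_civilian_casualties > target.military_value:
        return False
    return True

# A surrendering soldier is never a lawful target, no matter the orders.
print(is_engagement_lawful(Target(True, True, False, 0, 10)))  # False
```

Unlike a soldier at My Lai, a machine running checks like these cannot be talked, threatened, or enraged into skipping them; the hard part is writing rules that capture the law’s judgment calls, not getting a machine to obey them.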

Furthermore, robots are far less accident-prone than humans. Humans are a fallible bunch: our eyes see only a small fraction of the electromagnetic spectrum, our ears hear only a small range of frequencies, and our brains are easily flooded with chemicals that provoke emotional and irrational responses. Thus, even when we mean well, we often make horrible mistakes, especially in war. Recently, an American AC-130 accidentally attacked a Médecins Sans Frontières hospital in Afghanistan. In 2007, two journalists were gunned down by an Apache gunship after the pilots mistook their camera gear for weaponry. And in 1788, a false alarm about a Turkish attack caused an Austrian army to go to battle with itself, reportedly killing or wounding 10,000 men in history’s worst recorded incident of friendly fire.

While computers aren’t perfect, they often exhibit better judgment than humans. They remain perfectly cool in crisis situations and don’t succumb to emotions. They can collect data that our limited senses can’t. Indeed, Chinese researchers have developed an algorithm that is better at recognizing faces than humans are, and Google’s self-driving cars are far less accident-prone than human drivers. It is unlikely that killer robots would have accidentally bombed a hospital or gunned down journalists.

War is a messy business, and it always will be. But it is made messier still by the involvement of humans. Human nature is deeply imperfect, with a propensity for dumb mistakes and acts of cruelty. Killer robots might not be perfect, but they’re a whole lot better than killer humans.

William Kim

I am the editor of the Opinion Section. I enjoy watching Netflix, listening to “Danger Zone,” and taking long, romantic walks to the fridge. Some people call me Wild Bill.
