Why Do We Love (or Hate) Drones?

Photo Credit: Corporal Steve Follows RAF

A couple of months ago I attended DARC, the Drones and Aerial Robotics Conference, at New York University. The conference brought together a mishmash of computer nerds, drone aficionados, engineers, policy wonks and a handful of (mostly friendly) robots.

One of the strangest debates, one playing out among the American public and policymakers at the local, state and federal levels, is over the “goodness” or “evilness” of what is essentially a very large remote-controlled airplane. Halfway into the first day, during the “Life Under Drones” panel, this debate came to a head.

“How can we get people to stop thinking drones are cool?” asked one protester, who had previously been hanging out with a model UCAV outside the NYU Skirball Center.

Another conference participant rebutted with a follow-up: “How can we get people to stop thinking drones are evil?”

While both questions are unnecessarily polemical, they point to something worth asking: Why do we focus on the inherent “goodness” or “evilness” of drones?

We’re afraid of them.

Robots, and hence drones, have always been a source of fear and awe. From Metropolis to Asimov, that mix of awe and fear has pervaded popular culture. And it’s not the product of being a Luddite (although in some cases it very well could be). It’s:

A fear of a revolution: Or, more accurately, we’re afraid of robots turning “against” what we’ve programmed them to do. Take, for instance, the Maschinenmensch, the gynoid (a female robot) who appears in both the novel and film adaptation of “Metropolis” and was the first robot ever to appear on the silver screen. Rotwang, her creator, appreciates her coolness; for him, the Maschinenmensch was to be either an object of desire (a recreation of Hel, his lost lover) or, as is ultimately the case, a revolutionary figure. In the guise of Maria, Rotwang’s Maschinenmensch incites rebellion in the city, provoking a workers’ uprising that leads to the destruction of the “Heart Machine,” the central power source for the city.

While it may be decades before an autonomous system pulls a HAL or a Maria on us, this is not an unwarranted fear. In 1979, the first “death by robot” in the United States took place in a factory, where Robert Williams, an employee at Ford Motor Company, was crushed by a robotic arm on the assembly line. In 2007, an anti-aircraft cannon in South Africa killed nine soldiers and wounded 14 others due to a software glitch. And while most drones are controlled by human operators, the possibility of a system being hacked (child’s play when its links aren’t encrypted) or simply malfunctioning remains.

A fear of removing humans from the equation: The Pentagon will continue to throw out the party line (“fewer [American] lives are lost”) until the cows come home. At the same time, there is an understanding that “taking the humanity out of war” could be problematic: legally, morally, politically, or otherwise. Drones and autonomous or semi-autonomous machines cannot make the same moral calculations as a human soldier. A robotic cannon has no qualms about shooting its fellow soldiers (cf. South Africa), and there is no way to hold it morally accountable for its actions. Closing the accountability gap requires some legal maneuvering and is doable; programming an X-47 to engage in complex moral calculus is another monster entirely.

A fear of the power they give us (and the power we give them): The concerns surrounding the haziness of “signature strikes,” the targeting of American citizens abroad, and the surveillance capabilities of UAVs all signal a more general fear of projecting American power beyond its limits. Drones give us the ability to be ever-present without expending resources or putting lives at risk. They allow us to open a massive gap in safety and distance between the applying power (the one operating the drones) and the population on the ground.

Likewise, we’re afraid of the power we’ve given them. Asimov’s “Three Laws of Robotics” (a robot must do no harm, a robot must obey humans, and a robot must protect itself without violating the first two laws) illustrate a need to subjugate, born of a fear that the power dynamic could easily be reversed. Robots are allowed to be “autonomous” only inasmuch as they don’t disrupt the natural order of things: humanity over all. A fear of deviation from these laws, and thereby from a robot’s natural place in the world, is what drives humanity’s fear of robotics in Asimov, not simply the machines themselves.

However, explaining our uneasiness with drones is only part of the equation. The next step is figuring out how to get over it.


Author

Hannah Gais

Hannah is assistant editor at the Foreign Policy Association, a nonresident fellow at Young Professionals in Foreign Policy and the managing editor of ForeignPolicyBlogs.com. Her work has appeared in a number of national and international publications, including Al Jazeera America, U.S. News and World Report, First Things, The Moscow Times, The Diplomat, Truthout, Business Insider and Foreign Policy in Focus.

Gais is a graduate of Hampshire College in Amherst, Mass., and the Institute for Orthodox Christian Studies, where she focused on Eastern Christian Theology and European Studies. You can follow her on Twitter @hannahgais.