Foreign Policy Blogs

Of Human Rights and Robots

A robot autonomously climbing stairs. Image: NASA.

Since the creation of the modern international community following World War II, the prevention of war and conflict has been its major preoccupation. This goal has been pursued largely through two distinct bodies of international law and standards: international humanitarian law and international human rights law. The first Geneva Conventions set in motion the internationally accepted humanitarian “laws of war,” which deal primarily with what States can and cannot do when engaged in combat. In other words, humanitarian law has grappled with defining the duties of States engaged in wars or hostilities.

The human rights side of the international legal regulation of war has focused on the rights of individuals in relation to wars. Emanating from international human rights law, the right to life has emerged as the strongest protection for civilians, noncombatants, and prisoners of war.

The right to life’s primary function is to protect against the “arbitrary” deprivation of life. Even in peacetime, the international community’s definition of the right permits the use of the death penalty for criminal offences, despite the community’s broad abhorrence of the practice. Such instances are not considered arbitrary deprivations of life. Legal doctrines such as self-defense also carve out exceptions. Notably, the right is silent on the topic of abortion and takes no position on when life begins.

Attempts at reframing related issues, such as the complementary “right to die” or the conscientious objector’s “right to not kill,” have not been as successful, and the current debate over the use of Lethal Autonomous Robots (LARs) raises new questions concerning the arbitrariness of killing.

LARs are defined as,

“a weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”

Both the U.S. Department of Defense and Human Rights Watch endorse this definition. It demonstrates the difference between an LAR and other modern combat systems, such as unmanned, armed aerial drones that require human operation.

The current U.N. Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, Christof Heyns, has just released a preliminary report on the matter and called for a moratorium on the use, as well as the development, of LARs until the international community can reach a consensus on the practice and either develop a working framework to regulate, or potentially ban, their use.

The underlying concern of this debate is whether, for the killing of human beings to be legal, it must be carried out by other human beings. Advocates against the use of LARs, such as Human Rights Watch’s Campaign to Stop Killer Robots, lay out the ethical, legal, and other concerns in using LARs in their report “Losing Humanity: The Case Against Killer Robots.”

Critics were quick to point out the bias contained in the Human Rights Watch report beyond the title. Professor Michael N. Schmitt’s response largely focuses on sorting out what he sees as the egregious conflation of two principles of international humanitarian law on the part of the anti-LAR camp: per se illegal weapons (those that can never be used, such as biological weapons) and the unlawful use of otherwise lawful weapons (using a rifle, an otherwise legal weapon, to kill an innocent civilian).

Those advocating against LAR use fear the increased distance and detachment from battlefield operations. This widening gulf, they argue, will make killing easier to carry out, leading States to resort to force more frequently. Killing will be done more indiscriminately than with human actors, which will snowball into greater instances of retaliation and terrorism. Accountability, a major concern in modern hostilities, is also expected to suffer. Lastly, human beings behind the trigger create inherent “safeguards”: LARs will lack the emotion and human perception needed to distinguish between proper and improper targets.

The converse is equally plausible. A LAR that can distinguish between combatants and noncombatants cannot act outside of its authority to abuse, rape, torture, or otherwise mistreat humans, or lie about it afterward, abuses commonly known to be endemic in war zones. Its software will set parameters on the range of actions a LAR may take, and a unit or units can be assigned to specified human operators or monitors who will be legally responsible for the LAR’s actions.

The central question for the Special Rapporteur is “whether it is not inherently wrong to let autonomous machines decide who and when to kill.” This phrasing oversimplifies the issue and neglects the other elements of human agency necessarily present, such as whether to engage in combat, whether to deploy LARs, at what settings they operate, and under how much human oversight. This condensed question also refuses to consider the possibility that such systems could reduce the occurrence of collateral casualties in combat.

The Special Rapporteur has made a number of recommendations, none of which suggests any further engagement with core human rights. Chief among them is that all research and development should stop and become transparent. This is a remarkable proposition for the joint efforts of two of the most secretive sectors: the military and high-technology scientific research. However, failure to continue developing this discourse in human rights terms will erode the rights’ protections by addressing issues out of order. The Special Rapporteur’s report pushes the debate toward forcing features of humanitarian regulation into the right to life: in effect, a right to be killed by a human instead of a robot. It is problematic to force issues of international humanitarian law into the discourse of human rights law, because the exceptions characteristic of the international regulation of war taint the inviolability of universal human rights.



Marc Gorrie

Marc C. Gorrie holds a BA from Sarah Lawrence College, a JD from Indiana University Maurer School of Law – Bloomington, and an LLM in international human rights law with a specialization in international labor rights law from Lund University (Sweden). He is a port welfare worker and ship visitor for the Seamen's Church Institute in Ports Newark and Elizabeth, NJ, where he also collaborates on an educational program on the Maritime Labour Convention directed at port chaplains and welfare workers. He recently contributed to an EU project on legal education and law school curricula in the Gambia, and has held a research fellowship in legal ethics, lectured on federal Indian law and American legal ethics, and worked as a disability advocate.