Ethics of Killing with Robots

July 20th, 2016 in Articles, Ethics, Societal Impact

The recent murder of police officers in Dallas, an attack finally ended by the use of a robot to kill the shooter, has triggered awareness of the related ethical issues. First, it must be understood that the robot was under human control during its entire mission. Therefore, in this particular case, killing with robots reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a remotely controlled drone, simply paralleling the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, both land and sea (and eventually space), are all devices “programmed to kill” that operate with no human at the trigger. If anything, the lethal frustration with these devices is that they are too dumb, killing long after their intended use has passed.

I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov’s First Law: “Do not harm humans.” But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.

Projecting lethal force at a distance may be one of the few remaining characteristics exclusive to humans as a species (since we have found other animals innovating tools, using language, and so forth). Ever since Homo Whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. Homo Whomever may also have crossed the autonomous Rubicon with the first snare or pitfall trap.

Our challenge is to make sure that our systems designers, and those acquiring the systems, have serious ethical training with practical application. Building in safeguards, expiration dates, decision criteria, and the like should be an essential aspect of lethal autonomous systems design, as the sketch below suggests. “Should” is unfortunately the operative word; in many scenarios it is unlikely to happen.
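To make the idea concrete, here is a minimal sketch, in Python, of how an expiration date and a human-authorization criterion might gate any lethal action. All names and rules here are hypothetical illustrations of the design principle, not a description of any real system.

```python
from datetime import datetime, timedelta, timezone

class SafeguardedDevice:
    """Hypothetical device whose ability to act is gated by safeguards."""

    def __init__(self, mission_duration_days: int):
        self.deployed_at = datetime.now(timezone.utc)
        # Expiration date: the device deactivates itself after the mission
        # window closes, unlike a mine that stays lethal long after its
        # intended use has passed.
        self.expires_at = self.deployed_at + timedelta(days=mission_duration_days)
        self.human_authorized = False

    def authorize(self) -> None:
        """Record an explicit human decision permitting engagement."""
        self.human_authorized = True

    def may_engage(self) -> bool:
        """Every decision criterion must pass before any action is allowed."""
        if datetime.now(timezone.utc) >= self.expires_at:
            return False  # mission window closed: permanently inert
        if not self.human_authorized:
            return False  # no human at the trigger: refuse
        return True

device = SafeguardedDevice(mission_duration_days=30)
print(device.may_engage())  # False: no human authorization yet
device.authorize()
print(device.may_engage())  # True, but only within the 30-day window
```

The point of the sketch is that such criteria must be designed in from the start; they cannot be bolted onto a device that, like a mine, was never built to ask whether it should still be armed.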