Ethics of Robotic Deception

By Ronald C. Arkin, October 9th, 2018

What if you could no longer believe that what your robot assistant tells you is the truth? Are there circumstances under which that would be acceptable? What if it were for your own good? The time of robotic deception is rapidly approaching. We are bombarded with warnings about the inherent ethical dangers of the approaching robotics and AI revolution, yet far less concern has been expressed about the potential for robots to deceive human beings.

Deception, according to the Turing test for AI, is a hallmark of intelligence, and philosophers such as Dennett [3] have observed that “another price you pay for higher-order intentionality is the opportunity [for] … deception.” Our working definition of deception (one of many) is that “deception simply is a false communication that tends to benefit the communicator” [1]. Several robotics researchers, including our group, have considered the role of deception in both agent survival [4] and human-robot interaction [8].

Left unchecked, you may not be able to believe or trust your own intelligent devices.

We have successfully demonstrated the value of biologically inspired deception in four separate cases as applied to robotic systems: 1) pursuit-evasion using interdependence theory when hiding from an enemy [12]; 2) misdirection based on behavioral changes [6]; 3) feigning strength when it does not exist [2]; and 4) deception used for the benefit of the mark [7]. The response to our research has at times been quite striking, ranging from accolades (being listed as one of the top 50 inventions of 2010 by Time Magazine [9]) to damnation (“In a stunning display of hubris, the men … detailed their foolhardy experiment to teach two robots how to play hide-and-seek” [10], and “Researchers at the Georgia Institute of Technology may have made a terrible, terrible mistake: They’ve taught robots how to deceive” [5]). Perhaps it is where deception is used that is the hot button in this debate.

For military applications, deception seems widely accepted. Sun Tzu in The Art of War wrote that “All warfare is based on deception,” while Machiavelli in the Discourses stated, in effect, that “Although deceit is detestable in all other things, yet in the conduct of war it is laudable and honorable.” Indeed, the U.S. Army [11] has a Field Manual on the subject.

The dangers outside the military are quite real. And of course, once such a capability has been developed, how can we ensure that it is used only in the contexts for which it was designed? Is there an inherent, fundamental right whereby humans should not be lied to or deceived by robots? Kant’s categorical imperative clearly indicates that lying is fundamentally wrong, as is taught in most introductory ethics classes. But from a consequentialist point of view there are times when deception has societal value, even apart from the military (or adversarial sports): perhaps in calming a panicking individual during a search and rescue operation, or in the management of patients with dementia, with the goal of enhancing that individual’s survival. In such cases the intention is good even from a rights-based approach, let alone from a utilitarian or consequentialist formulation. But even then, does that warrant allowing a robot to possess such a capacity?

The point here is not to argue whether robotic deception is ethically justifiable, but rather to help generate discussion on the subject and to consider its ramifications. As of now there are absolutely no guidelines for researchers in this space, and it may indeed be the case that some should be created or imposed, whether from within the robotics community or from external forces. In particular, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is now confronting these questions, among many others. But the time is coming when, if left unchecked, you may not be able to believe or trust your own intelligent devices. Is that what we want?

Author Information

Ronald C. Arkin

Ronald C. Arkin is with the Mobile Robot Laboratory, Georgia Institute of Technology, 85 5th St. NW, Atlanta, GA 30332 U.S.A.

Acknowledgement

This research was supported by the Office of Naval Research under MURI Grant #N00014-08-1-0696. The author also thanks Alan Wagner, Jaeeun Shim-Lee, and Justin Davis for their contributions.
