The Danger of Empathy for Robots

April 13th, 2018, in Articles, Robotics, Societal Impact

A December 2017 article in The Atlantic, “My Son’s First Robot,” raises some interesting points about human-robot interaction. The article looks specifically at the perspectives children have about their toys, particularly animated and robotic ones. The designers’ goal is to choreograph movements and expressions that induce genuine emotions in the toy’s owner: essentially, to create empathy for robots.

Last decade’s toys relied on simple, fixed expressions to establish relationships with a child. Today’s toy is part of the Internet of Things (IoT), cloud-connected and powered by an artificial intelligence (AI) back end. Tomorrow’s toy will use animated emotions while interpreting those of the child with advanced facial recognition technology. The result will be increased empathy for robots, or at least for your personal robot.

This transformation has potential upsides and risks. Analytic aggregation of feedback from a child can allow the Bot to respond to the child’s emotional state: comfort for the sad, encouragement for the discouraged, or even bringing in (or channeling) psychiatric help for a depressed or disturbed child. Improved learning, increased social skills, and early intervention in problems should result.
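To make that idea concrete, here is a minimal sketch in Python of how aggregated emotion readings might drive a toy’s responses and flag persistent distress for adult follow-up. The labels, thresholds, and function names are invented for illustration; the article does not describe any real toy’s implementation.

```python
from collections import Counter

# Hypothetical labels a cloud "emotion analytics" service might return
# for each interaction; real toys and services will differ.
RESPONSES = {
    "sad": "offer comfort",
    "discouraged": "offer encouragement",
    "happy": "celebrate and keep playing",
}

def choose_response(recent_emotions):
    """Pick a response based on the most frequent recent emotion,
    and flag persistent distress for a parent or professional."""
    if not recent_emotions:
        return "keep playing", False
    counts = Counter(recent_emotions)
    dominant, _ = counts.most_common(1)[0]
    # Escalate if distress dominates a long stretch of interactions.
    distress = counts["sad"] + counts["discouraged"]
    escalate = len(recent_emotions) >= 10 and distress / len(recent_emotions) > 0.7
    return RESPONSES.get(dominant, "keep playing"), escalate

# Example: a stretch of mostly sad readings triggers an alert to an adult.
action, alert = choose_response(["sad"] * 8 + ["happy"] * 2)
print(action, alert)  # -> offer comfort True
```

The same aggregation pipeline, of course, is what makes the commercial and manipulative uses discussed below possible.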

However, there are risks. Toy creators, as with any online technology company, have a simple monetary incentive to increase their influence time and therefore the value of their product. This can range from improved ad revenues to virtual enslavement. The child-toy bond may be transferred to other devices, creating expectations and influence tied to AI speakers, phones, cars, or other home appliances. The Atlantic author suggests “…robots might take over simply by expertly manipulating us into letting them win.” While I’m not expecting the robot apocalypse soon, there are more subtle dangers of empathy for robots.

One is deference to the decisions of the device. Individuals may be less critical of its suggestions, or may even yield control to an autonomous system with which they think they have a relationship. There are indications that semi-autonomous automobiles may have already crossed the line into unwarranted trust, even during early testing.

But what if your BFF robot starts selling you something? Perhaps nudging you towards specific brands? This is a concern of the IEEE AI Ethics efforts. But it doesn’t stop there. Carefully selected placement of political, propaganda, or other messages, customized “just for you,” is a fairly predictable future.

What concerns keep you up at night that you don’t want your BFF Bot to know about?