Preparing to Design Robots for Social Contexts

April 22, 2022

Clinton J. Andrews

Educational programs in robotics have focused mostly on developing science, technology, engineering, and math skills, with recent extensions into the arts [1]. Until recently, this focus was entirely appropriate. Successful roboticists have been generalists with a specialty [2] whose careers involve both thinking and doing. Thinking (“investigative”) and doing (“realistic”) are personality traits that, when both are strong, predict success in computer science, engineering, and, by interpolation, robotics [3]. Industry voices confirm that roboticists need skills in systems thinking, a programming mindset, active learning, mathematics, science and other applied fields, judgment and decision making, good cross-disciplinary communication, technology design, complex problem solving, and persistence [4]. This list is adequate for many applications of robotic autonomous systems (“robots”).

However, robots increasingly operate among people: they work alongside us in factories and warehouses, share our streets and sidewalks, clean our homes, and care for the most vulnerable among us [5]. These emerging social contexts add new requirements to the knowledge that successful roboticists need.

  • Human physiology: Roboticists need a deep appreciation of the limits and vulnerabilities of the human body to ensure human safety when designing the kinematics, navigation, and feedback systems of robots (see the first sketch after this list).
  • Human cognition: Many roboticists learn the basics of human–machine interaction and usability. Far fewer learn foundational concepts from cognitive science about how humans make decisions, navigate and find their way, communicate, and interpret intentional behavior, concepts that could help robots interpret human actions (see the second sketch after this list).
  • Moral reasoning: All autonomous systems may have ethical impacts, and arguably all should be designed to avoid unethical outcomes [6]. Designers bear some responsibility for their designs, even in a world where the autonomous systems they design eventually design other autonomous systems [7]. Roboticists thus need to be able to think through the ethical implications of their work.
  • Social rules: Humans are social animals who act within elaborate structures of social constraints, both formal and informal. We follow and expect others to follow norms of good behavior; we establish clear rules and codify laws; and we reproduce social structures that outlive each of us as individuals. The emerging concern is about what (not who) is acting and how [8]. Roboticists need to build into their designs a good understanding of social rules and how those rules evolve to accommodate innovations (see the third sketch after this list). This includes choosing whether to design for informal social norms as they emerge or merely to react to eventual legal requirements. As we socially construct a new reality, we will also need to establish relations of trust, responsibility, and accountability.
  • Social implications: Small changes in social practices due to the introduction of robotic applications can scale up to produce unintended consequences within dynamic systems of collective decision making: markets, politics, and culture. Examples include the displacement of less-skilled workers by robots [9] and the changing calculus of conflict due to drone warfare [10]. Roboticists need to develop skills in anticipating and mitigating the future consequences of deploying their innovations at scale.
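
To make the physiology point concrete, consider speed-and-separation monitoring, a common approach to human safety in collaborative robotics (standardized in ISO/TS 15066). The Python sketch below is a minimal illustration under assumed parameters, not a certified implementation: the function name max_safe_speed and all numeric limits are invented for clarity, and human motion during the robot's braking phase is deliberately ignored to keep the algebra short.

    # Minimal sketch of speed-and-separation monitoring (after ISO/TS 15066).
    # All numeric values are illustrative assumptions, not certified limits.

    def max_safe_speed(separation_m: float,
                       human_speed_mps: float = 1.6,  # assumed walking pace
                       reaction_time_s: float = 0.1,  # sensing + control delay
                       decel_mps2: float = 2.0,       # robot braking rate
                       clearance_m: float = 0.2) -> float:
        """Return a robot speed (m/s) low enough that the robot can stop
        within the separation that remains after the human closes in
        during the robot's reaction time (human motion while the robot
        brakes is ignored here for simplicity)."""
        budget = separation_m - clearance_m - human_speed_mps * reaction_time_s
        if budget <= 0.0:
            return 0.0  # already too close: command a stop
        # Stopping distance at speed v is v*t_r + v**2 / (2*a).
        # Solve v*t_r + v**2/(2*a) = budget for the positive root v.
        a, t = decel_mps2, reaction_time_s
        return -a * t + (a * a * t * t + 2.0 * a * budget) ** 0.5

    # Example: cap the commanded speed when a person is 1.5 m away.
    print(round(max_safe_speed(1.5), 2))  # ~1.94 m/s under these assumptions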
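
One cognitive-science idea with direct design payoff is treating people as approximately rational agents and inferring their likely goals from observed motion. The sketch below performs a one-step Bayesian update over candidate goals using a von Mises likelihood on heading; the scenario, the goal set, and the concentration parameter kappa are assumptions chosen purely for illustration.

    import math

    # Toy Bayesian goal inference: which doorway is this pedestrian heading
    # toward? Goal locations and kappa are illustrative assumptions.

    GOALS = {"door_A": (5.0, 0.0), "door_B": (5.0, 4.0)}

    def bearing(frm, to):
        return math.atan2(to[1] - frm[1], to[0] - frm[0])

    def infer_goal(position, observed_heading, prior=None, kappa=4.0):
        """Posterior over goals after one heading observation, using a
        von Mises likelihood centered on the bearing to each goal."""
        prior = prior or {g: 1.0 / len(GOALS) for g in GOALS}
        post = {}
        for goal, xy in GOALS.items():
            err = observed_heading - bearing(position, xy)
            post[goal] = prior[goal] * math.exp(kappa * math.cos(err))
        total = sum(post.values())
        return {g: p / total for g, p in post.items()}

    # A pedestrian at the origin walking nearly due east is probably
    # heading for door_A; the posterior should say so (~0.65 here).
    print(infer_goal((0.0, 0.0), observed_heading=0.1))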
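
Social rules can also be made machine-checkable. One simple pattern is an action filter that vetoes candidate plans violating explicitly coded norms before execution. The norms below (yield to pedestrians, keep right, observe quiet hours) are invented examples for a hypothetical sidewalk robot; as the bullet above argues, real rule sets would need to be locally grounded and updated as norms evolve.

    from dataclasses import dataclass

    # Sketch of a norm-aware action filter for a hypothetical sidewalk
    # robot. The Action fields and the norm set are invented examples.

    @dataclass
    class Action:
        speed_mps: float
        lane: str              # "left" or "right" half of the sidewalk
        emits_noise: bool
        pedestrian_nearby: bool
        hour: int              # 0-23, local time

    def norm_yield(a):         # informal norm: slow down near pedestrians
        return not a.pedestrian_nearby or a.speed_mps <= 0.5

    def norm_keep_right(a):    # local convention: keep to the right
        return a.lane == "right"

    def norm_quiet_hours(a):   # codified rule: no noise from 22:00 to 06:00
        return not (a.emits_noise and (a.hour >= 22 or a.hour < 6))

    NORMS = [norm_yield, norm_keep_right, norm_quiet_hours]

    def permitted(action):
        """Return (ok, names_of_violated_norms) for a candidate action."""
        violated = [n.__name__ for n in NORMS if not n(action)]
        return (not violated, violated)

    # Rolling past a nearby pedestrian at 1.2 m/s violates the yielding norm:
    print(permitted(Action(1.2, "right", False, True, 14)))
    # -> (False, ['norm_yield'])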

Much of this additional knowledge comes from the social sciences and humanities, which rely on different research methods and more contingent theories than are common in the applied natural sciences. Methods to characterize human social behavior may involve legal analysis, ethnographic observation, survey research, and behavioral experiments, alongside familiar sensing technologies. Theorizing often aspires only to be locally grounded rather than universally applicable, because human behavior varies so much by context. It is often appropriate to access such knowledge through teamwork and multidisciplinary collaboration [11].

It is tempting to jump right into the design questions associated with creating socio-robotic colonies [12], releasing robots from their social isolation [13], and equipping them with social intelligence [14]. But if we envision robotics as a public interest technology that, at least aspirationally, promotes the public good, we first ought to acquire the appropriate knowledge and skills. That preparation should incorporate the elements discussed here, along with others that readers will identify.

ACKNOWLEDGMENTS

This work was supported in part by the NSF award Socially Cognizant Robotics for a Technology Enhanced Society (SOCRATES), Grant 2021628 (NRT-FW-HTF), with advice from Rutgers University colleagues Kristin Dana, Jacob Feldman, Jingang Yi, Kostas Bekris, Hal Salzman, Pernille Hemmer, Matthew Stone, Aaron Mazzeo, and Kathy Haynie.

Author Information

Clinton J. Andrews is the President of the IEEE Society on Social Implications of Technology. He is a Professor and the Associate Dean for Research with the Edward J. Bloustein School of Planning and Public Policy, Rutgers University, New Brunswick, NJ, USA.