Ethics and System Design in a New Era of Human–Computer Interaction

January 2, 2020

Contemporary and emerging digital technologies are leading us to question the ways in which humans interact with machines and with complex socio-technical systems. The new dynamics of technology and human interaction will inevitably exert pressure on existing ethical frameworks and regulatory bodies. This will require the development of innovative approaches to Human–Computer Interaction (HCI), as well as methods to support the design of complex socio-technical systems.

HCI is an interdisciplinary field of study based on a user-centered approach to the design of contemporary and complex computer systems [1]. HCI is fundamentally concerned with all facets of human engagement with these systems, ideally spanning initial design through post-implementation evaluation. An underlying principle of HCI is the requirement that technology be designed and developed for the benefit of individuals (users) and society at large, through the employment of distinct user-oriented approaches and methodologies. Designing for the benefit of users and society is generally achieved through consideration of all forms of contact between humans and machines. Analysis has traditionally centered on user interfaces [2], but this has been, and to some degree still is, particularly difficult given the lack of usability often associated with early forms of personal computing [3].

As technology pervades all aspects of our existence, and Artificial Intelligence (AI) and machine learning systems become commonplace, a new era of HCI is emerging that directs our focus beyond traditional approaches to encompass other intricate interactions with computer-based systems. This new phase will inevitably challenge our basic understanding of what HCI means, and of how we define and design our interactions with a range of technological systems, some of which claim to be “intelligent” and are visible, while others operate covertly and remain largely undetected.

This special issue presents a variety of papers that collectively demonstrate the ethical and regulatory challenges stemming from human engagement with diverse socio-technical systems, from those intended to be implemented on a smaller scale and to impact individuals, to those that have consequences for entire countries, cultures, and societies.

This issue commences with two conventional HCI articles surveying and interpreting attitudes toward new technology in the form of Autonomous Weapon Systems (AWS) and social robotics. The introductory paper by Verdiesen et al. on moral values related to AWS presents the results of a survey in which the morality of such systems is evaluated by both military personnel and the general public. The comparison of trust, anxiety, and blame attribution between the two groups raises interesting questions about gaps in public knowledge of Artificial Intelligence technology. Human dignity and anxiety emerge as recurring themes that must be addressed in ensuing discussions around AWS and AI.

The second article, by Campanozzi et al., furthers the exploration of AI in the context of social robotics. The manuscript reports on the outcomes of a pilot survey of student perceptions of trust in social robots integrated into everyday life, which yielded 1213 valid responses. Of particular significance in this article is the rejection of anthropomorphized robots for any roles involving direct human contact, once again highlighting the effect of the uncanny valley, but also emphasizing the lack of trust in areas where robots are expected to assist humans in their roles. Importantly, of the 14 application areas presented to participants, only six future scenarios were perceived favorably.

In the article by Singh et al., the delegation of decision making to algorithms is considered through a sports case study. The authors deliberate on the assignment of difficult decisions in sports to a computer algorithm, presenting the SmartFlag concept as a technological solution that promises benefits to stakeholders, such as the removal of human error. However, the authors warn that “algorithmic definition is not necessarily a definite good.” As such, our discussion must carefully consider the complex nature of certain decisions that cannot be reduced to a binary, in conjunction with the diminishing of certain social activities and responses resulting from the deployment of such systems. The paper further notes the importance of adhering to the precautionary principle and of examining the social impact of decision-making technology in sport, particularly when this technology will inevitably be repurposed for secondary uses, which may have profound impacts on the way humans engage with these systems.

Moving on from artificial intelligence and decision-support systems and their use in a range of settings, the next two articles investigate non-conventional HCI and the ethical considerations surrounding specific socio-technical systems in the context of emergency and disaster management. Shipman explores the ethical challenges associated with the design of infrastructure informed by human behavior and pedestrian dynamics. While the author identifies the opportunities this introduces, from supporting individualized experiences and personalization in smart cities to facilitating enhanced evacuations during hostile or natural emergencies, the ethical risks are also emphasized. The need for industry self-regulation is evident, as is the importance of researchers and modelers exhibiting awareness of the relevant ethical challenges.

Munoz similarly focuses on extreme situations, concentrating on the use of microgrids for disaster management. The author specifically explores the latent value of microgrids and their potential use in humanitarian relief supply chains and disaster management scenarios. The concept of antifragility is examined, highlighting that microgrids demonstrate their merit only in extreme events, thereby challenging traditional notions of risk management and ethics. These notions typically privilege loss minimization over latent value; the latter is difficult to express to relevant stakeholders, but should nonetheless be deliberately designed and integrated into complex socio-technical systems such as microgrids.

The final article demonstrates a large-scale application of HCI by providing an overview of the evolution of China’s Social Credit System. Trauth-Goik posits that surveillance scholarship has largely been concerned with Western-instituted surveillance systems, ignoring the non-Western context. The author aims to fill this gap by elaborating on China’s desire to develop and promote a “culture of honesty and integrity” through the deployment of a rapidly advancing surveillance system. Furthermore, the paper calls for the reconceptualization of the Social Credit System as the Social “Trust” System; trust being a common and underlying theme throughout this special issue.

In general, this special issue encourages discussion of a number of questions related to the ethics and regulation of digital technology. It examines important considerations moving forward, in the interest of, and to the benefit of, the HCI domain and individuals and society at large. The preliminary questions are: 1) how do we manage, communicate, and deliberately design and embed ethics into existing HCI approaches, given the diverse nature and reach of the presented socio-technical systems? and 2) what regulatory approaches are suitable given this complex socio-technical landscape? These questions should be considered with a great deal of urgency and with a human orientation and emphasis, as we clearly need to reinforce the crucial point that the “human” (and not the “computer”) should be the focus of future HCI research.

Guest Editor Information

Roba Abbas is with the University of Wollongong, Australia. Email: roba@uow.edu.au.


Stephen Marsh is with the University of Ontario Institute of Technology, Canada. Email: Stephen.Marsh@uoit.ca.

Kristina Milanović is with Imperial College London, United Kingdom. Email: km908@ic.ac.uk.


The complete version of this article, including references, is available online.