Hello Automated Empathy

By Andrew McStay on March 10th, 2021 in Articles, Ethics, Social Implications of Technology

For better or worse, we have become familiar with the idea that technologies profile people to deliver a service of some sort. An art-house film enthusiast may find their Netflix recommendations eminently agreeable. Profiling of children’s negative emotions to serve ads is perhaps less acceptable. Networked societies are shifting towards a situation where data about our bodies inform profiling and algorithmic decisions. This has pros and cons.

The significance of human-state measurement is that it not only says something about the physical health of a person, but also about their emotional and psychological disposition. The extent to which physiological behaviour correlates with emotional and intentional states is hotly debated, but what is clear is that we are moving to a social scenario of human behavioural profiling based on bodies, rather than just screens, clicks and likes.

This increasingly takes the form of emotion recognition, a weak form of AI that purports to read and react to emotions through text, images, voice, computer vision and data about the body. Such ‘Emotional AI’ systems can see, count, compute, cluster, react and output. Key sites for use of emotion recognition are education, security, policing, retail, advertising, social media, workplaces, health, automobiles, consumer items (such as toys), public transport infrastructures, and other situations where it is valuable in any way to understand individual or group emotion.
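
To make that pipeline concrete, here is a deliberately toy sketch in Python. It is not any vendor’s system: the mini-lexicon and labels are invented for illustration, standing in for the trained models that real systems apply to text, voice, images and bio-signals.

```python
# Toy sketch of the basic pipeline shape: ingest -> score -> classify
# -> output. The lexicon and labels are invented for illustration.
from collections import Counter

EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", "love": "joy",
    "sad": "sadness", "miss": "sadness", "lonely": "sadness",
    "angry": "anger", "hate": "anger", "furious": "anger",
}

def tag_emotion(text: str) -> str:
    """Return the most frequent emotion label found in the text."""
    words = text.lower().split()
    hits = Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)
    return hits.most_common(1)[0][0] if hits else "neutral"

print(tag_emotion("so happy today, love it"))  # -> "joy"
```

Even this trivial version “counts, computes and outputs”; commercial systems differ enormously in sophistication, but the pipeline shape is similar.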

When emotion sensing was new

For those unfamiliar with terms such as “affective computing,” emotion recognition may seem novel. However, it really is not: modern emotion sensing has strong roots in the 1800s. For example, in 1865 Claude Bernard (recognised as the father of modern physiology) provided an account of the heart as both an object (a pump) and a psychological phenomenon. In the 1860s the neurologist Duchenne de Boulogne created a taxonomy of facial expressions that informs modern computer vision techniques. Wilhelm Wundt’s 1880s lab-based studies used measuring devices to track correlations between physiology, thought and emotion. Throughout the 1900s, emotions would continue to be mediated, visualised and represented in tables and diagrams, shifting emotion from an ineffable status to something thought to be quantifiable.

Social significance

Yet, today, something else is afoot. Technologies can now turn human-state signals into fungible electronic data, identify patterns in small and large datasets, and apply and test rules learned in one situation in other situations, all increasingly cheaply. This provides for hitherto unseen scale, and portends nothing less than the automated industrial psychology of emotional life. Hyperbole? We’ll see.

Socially, some of this is occurring largely unnoticed, such as wearables that passively track bio-signals and make inferences about affect and disposition. Less obvious sectors are also taking an interest, perhaps foremost the automotive sector (which has an interest in reducing road deaths through in-cabin cameras and sensors), education (which is already in the process of datafication), children’s objects (such as “emo-toys” and wellbeing trackers), advertising and retail (which function through emotion), workplaces (to save on recruitment and for performance management), and more.

The simplest way to think about it is that, for any situation where there is personal, communicative, entertainment, economic or surveillant value in emotion, emotional AI and empathic technologies are likely to be applied sooner or later.

Flaws and concerns

At the Emotional AI Lab we study the social impact of these technologies, asking basic questions such as: do they work; what are the consequences for people; and what do citizens think of the promises of these technologies?

Through UK national surveys and focus group work, we find disquiet about the premise of using data about emotion. For example, our 2020 UK-wide survey of emotion tracking use cases that are popular now (political advertising on social media) or likely to become more popular in the next few years (biometrics in smart advertising, schools, workplaces and cars) finds discomfort with the premise of emotion tracking. Interestingly, the highest level of concern was for the use of data about emotions in political advertising (66% outright rejection). The next highest was tracking in the workplace (emails, cameras and voice), at 58% outright rejection. Other use cases hovered around this outright rejection mark. These numbers suggest citizen disquiet about, and mistrust of, emotion recognition.

Effectiveness: is the solution worse than the problem?

Does emotion recognition work? For social scholars of technology the general answer is “no,” but this is not straightforward because what emotions are is keenly debated. That said, it is uncontroversial to suggest that emotions are an outcome of brain–body–environment interactions, giving emotions an emergent quality. This means that they are not reducible to parts of the brain, behaviour or environments. Rather, an emotion is a state that follows from having evaluated (or ‘appraised’, to use the jargon) a situation as relevant to a person’s concerns.

This makes recognising emotions more complex than, say, being able to label a facial expression. After all, we are clearly not always happy when we smile. Perhaps the answer is more data about a person’s history, their physiology, where they are, who they are with, what they are doing, whether there is a task to be achieved, and so on. This would improve accuracy, but even then it has scope to be internationally problematic. For example, in Japan, the workplace analytics firm Empath counts sorrow as a basic emotion. Emotions themselves, as well as expressions, are subject to regional and ethnocentric variation.
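
As a rough illustration of why context matters, consider the following Python sketch. Every signal, threshold and label here is hypothetical; the point is only that the same facial expression can map to different inferred states once context and physiology are included.

```python
# Hypothetical sketch: the same expression reads differently once
# context and physiology are considered. Thresholds and labels are
# invented for illustration, not drawn from any real system.
from dataclasses import dataclass

@dataclass
class Observation:
    expression: str   # e.g. "smile", from a face model
    heart_rate: int   # beats per minute, from a wearable
    context: str      # e.g. "job_interview", "party"

def infer_state(obs: Observation) -> str:
    if obs.expression != "smile":
        return "unknown"
    # A smile plus an elevated heart rate in a stressful setting may
    # signal politeness or nerves rather than happiness.
    if obs.context == "job_interview" and obs.heart_rate > 100:
        return "anxious"
    return "probably happy"

print(infer_state(Observation("smile", 110, "job_interview")))  # -> anxious
print(infer_state(Observation("smile", 70, "party")))           # -> probably happy
```

And even a far richer model of this kind would still bake in one culture’s assumptions about what, say, a smile in an interview means.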

Is this OK?

I do not think there is a binary yes/no answer. As is often the way, “it depends.” A game, for example, that takes biometric inputs to enhance game play and ensures that all the clever processing happens on a local device (at the “edge”) has a lot of promise; a sketch of what that might look like follows below. Use in workplaces or schools to surveil and make judgements about performance has less appeal (and I say this as a smiley person). Overall, is it worth it? For now, my emotion is: pensive.
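
For a sense of what that “edge” design might look like, here is a minimal Python sketch under invented calibration values: raw bio-signal samples are reduced to a single game-control number on the local device, and only that number is passed on.

```python
# Minimal sketch of edge-style processing: raw samples are reduced to
# one derived value locally and are never stored or transmitted. The
# 60-120 bpm calibration range is an invented example.
from statistics import mean

def arousal_to_difficulty(heart_rate_samples: list[int]) -> float:
    """Map recent heart-rate samples to a 0..1 difficulty scalar."""
    hr = mean(heart_rate_samples)
    return min(max((hr - 60) / 60, 0.0), 1.0)

# Only the derived scalar reaches the game loop; the samples do not.
print(f"difficulty: {arousal_to_difficulty([72, 75, 80, 78]):.2f}")  # 0.27
```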

About the Author

Andrew McStay is Professor of Digital Life at Bangor University, UK. His most recent book, Emotional AI: The Rise of Empathic Media, examines the impact of technologies that make use of data about affective and emotional life. He is Director of The Emotional AI Lab, whose current projects include cross-cultural social analysis of emotional AI in the UK and Japan. An IEEE SSIT member, his non-academic work includes standards development for IEEE P7014 and ongoing advisory roles with start-ups, NGOs and policy bodies.

This article represents the author’s opinion. Published by Miriam Cunningham.