Against Modern Indentured Servitude (“I’m Spartacus”)

July 12, 2022

Jeremy Pitt and Maria Tzanou

 

The First International Workshop on Artificial Intelligence for Equity (AI4Eq) was held virtually in Santiago de Compostela, Spain, in conjunction with ECAI2020 [1]. The focus of the workshop was to examine the role of artificial intelligence (AI) science and technology in advancing toward the United Nations (UN) Sustainable Development Goals, particularly in the context of a mix of standards, policies, regulations, declarations, guidelines, and charters on the ethics of AI and its “trustworthy” and “responsible” development. A particular theme emerged, identifying pressing issues of human rights, personal safety, and modern slavery.

The second edition of the AI4Eq Workshop, held in association with IEEE ISTAS2021, the flagship conference of the IEEE Society on Social Implications of Technology (SSIT), aimed to analyze more deeply the emergent theme of rights, safety, and modern slavery, but twisted the plane of perspective: rather than taking a top-down, institution-led, policy-oriented approach, it took a more bottom-up, values-driven, people-centric one. This makes the central question of system codesign (between developers and users) less about the ethics of developers mandated by universal declaration, and more about the local empowerment and lived experience of users, and about defining the opportunities, boundaries, and guardrails that determine minimal and maximal rights to self-organization and self-determination. Within those minimal and maximal rights, we can then focus on issues of social justice, in particular the empowerment of marginalized communities and the redress of asymmetries of power within the Digital Society itself.

The First International Workshop on Artificial Intelligence for Equity identified pressing issues of human rights, personal safety, and modern slavery.

In particular, this asymmetry of power within next-generation socio–technical systems, especially those involving AI, raises the prospect of an insidious threat that needs urgent attention. Modern slavery has been identified by the UN as one of the first human rights issues to arouse widespread international condemnation, yet it persists today, and slavery-like practices continue to present a serious and pernicious problem [2]. This special issue, as the proceedings of the Second AI4Eq Workshop, focuses on what has been called modern indentured servitude.

Indentured servitude is a form of labor in which one person is contracted to work for another until a debt is paid off. In its less unreasonable forms, it can support apprenticeships in which students learn a trade from a master professional; in its more dubious forms, it can act as a social filter, ensuring that access to desirable professions is gained only through unpaid internships that only the already wealthy can afford to accept; and in its very worst form, the debt grows faster than the ability to repay it, so that the indebted party can never escape. The workshop aimed to examine digital versions of indentured servitude, that is, forms of indentured servitude that occur in the Digital Society, deliberately or as a degenerative unintended consequence, covertly or overtly, and/or because of an unethical misuse of AI. Such degenerative consequences include, but are not limited to, surveillance capitalism [3] and techno-feudalism [4]–[6].

The second AI4Eq Workshop took a less policy-oriented, more bottom-up, values-driven, people-centric approach.

The workshop was organized around two invited speakers and four panel sessions addressing interconnected streams: 1) senior lived experience; 2) youth lived experience; 3) well-being; and 4) ethics.

Josiah Ober’s invited talk presents Aristotle’s wrestling with the idea that the point of existence is the pursuit of human flourishing (eudaimonia), how this is incompatible with slavery, and how Aristotle tied himself in unscientific knots attempting to reconcile the two. Some of those knots are still apparent, 2,500 years later, in justifications for the mistreatment of people. Katina Michael’s invited talk demonstrates this in graphic and tragic detail, showing how rapid technological progress, dispassionate commercial imperatives, and ineffective regulation have wreaked havoc on the business of urban transportation, with devastating consequences for real, and all too often already marginalized, people.

On the theme of senior lived experience, Abbas and Michael discuss technological solutions for managing dementia and the risk of amplifying marginalization and loss of personhood. They advocate a socio–technical codesign methodology that emphasizes values such as safety, well-being, and self-esteem in the development of healthcare systems for treating medical conditions such as dementia. Miller develops this theme with a discussion of caregiving robots as a therapeutic tool to address isolation, but warns of the risks of not respecting boundaries (e.g., around personal and sensitive information) that are normative and socially constructed, and not necessarily obvious to a robot. Pittinsky takes the theme further, highlighting a widespread failure to acknowledge what it essentially means to be a carer and to be cared for, and discussing how technology can elevate convenience over empathy when it should instead augment, rather than supplant, human capabilities.

Degenerative consequences of the Digital Society and AI may include, but are not limited to, surveillance capitalism and techno-feudalism.

At the other end of the age spectrum, Mertzani and Pitt look at the role of social influence on cognitive development and “growing up” in the context of the normalization of surveillance capitalism. They argue that, historically, legislation has been a tool for the protection of children and children’s rights: if legislation could be used to stop children from being coal miners, it should be possible to pass legislation that stops them from being data mines. Vasalou considers how automation in digital learning technology is often associated with the idea that learning and teaching will inevitably be transformed for the better, although there is no guarantee of this outcome. She argues that situated research approaches are needed to identify the current role these technologies play in schools: how schools appropriate technologies, the place those technologies occupy in everyday schooling, and how they fit within the mundane aspects of school practice. Ideally, such technologies would also augment creativity and critical-thinking skills, rather than producing examination fodder through the rote acquisition of prescribed facts or a singular narrative.

In the more general social context of well-being, Dannhauser argues that embedding psychological manipulation techniques in digital technology (such as mobile phones, computer games, and virtual assistants) ensnares—even psychologically enslaves—attention, and directly or indirectly leads to energy theft: energy that could otherwise have been expended on socially productive purposes or the pursuit of human flourishing (as discussed by Ober, earlier). Perakslis extends this argument from psychology to physiology, questioning the long-term effect of chronic over-activation of the human sympathetic nervous system, which causes people to exist (or perhaps rather, subsist) in a permanent state of stress, or allostatic load (an index of the biological wear and tear on human physiology), and outlines the many debilitating effects this can have. Perakslis proposes that we have to perceive and understand the narratives of entitlement that allow stressors to infiltrate and overtake our physiological existence, and take back control of our own brains. Rychwalska picks up on both themes of empowerment and social influence to discuss the possible psychological processes that entice digital media users to accept their role as data sources. She identifies dark patterns—interface and service designs that prompt suboptimal choices—as a particular problem arising from data aggregation on an unprecedented scale. But we have to “see through this glass darkly” to avoid amplifying society’s most self-destructive characteristics: for example, the scientific enlightenment that provided the foundations for this technological development in the first place is under threat from the smorgasbord of alternative facts and theories offered on social media.

Dannhauser argues that embedding psychological manipulation techniques in digital technology ensnares—even psychologically enslaves—attention.

In the final quartet of papers, the question of ethics is revisited in the context of modern indentured servitude. First, Gardner explores the concepts of responsibility, recourse, and redress that should, but frequently do not, enable citizens to exercise their rights and be treated fairly. Gardner argues that these concepts are severely neglected in AI ethics principles, standards, and regulations, and that we need citizen-centric design in the development of AI systems and related AI regulation, so that when “things” go wrong—as they inevitably will—citizens are equally empowered and enabled to seek recourse and redress. In many situations, “freedom to” is as important as “freedom from”: rights are meaningless in the absence of the ability to exercise them. Second, Lively argues that thinking about the ethics of AI (and the ramifications for selfhood and society brought about by technologically enabled modes of modern indentured servitude) implies thinking about the future; her article shows that the present is the time to bring about a more “futures literate” approach to human-centered socio–technical systems design in the context of ethical AI futures. Third, Carmel and Paul explore the EU narrative of “European AI” and intelligent technology, suggesting that the “Made in Europe” label, with its implication of better, safer, and more trustworthy technology, is aligned with a “myth” that European market-making and integration were focused on peace and prosperity. They argue that this perpetuates the suppression of any acknowledgment that the myth is founded on a political economy that exploited colonialism. This is not necessarily the socio-political global economy that we want, need, or deserve. Finally, Tzanou examines the AI datafication of vulnerable social groups, such as poor people and women. She argues that only if we are attentive to the inequalities that the most vulnerable face can AI make a significant contribution to addressing social problems in the future.

Each of us needs the courage to be Spartacus.

In summary, the term “modern indentured servitude” did not originate with this workshop, but we hope that this special issue has highlighted many of the different shapes and processes it can take, some more insidious than others. We would like to think that, if these papers could talk, they would stand up one after another and say, “No, I’m Spartacus.”

In these dark times, each of us needs the courage to be Spartacus.

ACKNOWLEDGMENT

We would particularly like to thank the Conference Chair of ISTAS’21, Heather Love, for her help and support in running the AI4Eq-2 Workshop in association with the conference, and Rui Cardoso, Kristina Milanovic, and Asimina Mertzani for organizational support and website maintenance. Thanks also to all the speakers and workshop attendees.

Some historical information about Spartacus and his leadership of a slave rebellion against the Roman Republic can be found at https://en.wikipedia.org/wiki/Spartacus. The Hollywood version, and explanation for “I’m Spartacus,” can be found at https://www.youtube.com/watch?v=-8h_v_our_Q (both links last accessed May 2, 2022).

Author Information

Jeremy Pitt is a professor of Intelligent & Self-Organising Systems with the Department of Electrical & Electronic Engineering, Imperial College London, London, U.K. He is a Fellow of the British Computer Society (BCS) and the Institution of Engineering and Technology (IET), and a member of IEEE. He is the Editor-in-Chief of IEEE Technology and Society Magazine.

Maria Tzanou is an associate professor in law with Keele University, Staffordshire, U.K. She also acts as a permanent scientific advisor to the Greek Ministry of Justice on data protection issues and co-convenes the U.K. Society of Legal Scholars (SLS) Cyberlaw Section. Her research focuses on European constitutional and human rights law, privacy, data protection, AI, big data, surveillance, and the inequalities of data privacy law and how these affect vulnerable groups.
