The Need for Public Interest Technology

September 7th, 2021, in Artificial Intelligence (AI), Editorial & Opinion, Ethics, Human Impacts, Magazine Articles, Social Implications of Technology, Societal Impact

There are many domains of human endeavor that invoke the “public interest,” for example, environmental sustainability, law, journalism, and, perhaps most pointedly in 2020–2021, health. All of these domains require some sort of tradeoff between different and potentially competing stakeholder priorities. For example, public interest in environmental sustainability, with respect to air quality, potable water, and arable land, can be in contention with requirements of manufacturing, transport, and consumer demand. Advocacy for social justice through public interest law might set a disadvantaged or disempowered group against a privileged or powerful one [1], [2]. Similarly, journalistic reporting in the public interest must consider holding the powerful accountable for their actions and decisions and the potential impact on society against basic rights to privacy and ethical practices in investigative journalism. Choices in public health sometimes appeal to the concept of procedural justice and can involve a multiperspective tradeoff between individual risk and collective benefit, personal preference and state mandate, financial costs and effectiveness of treatments, and speed and caution (and note these are tradeoffs not false dichotomies, as some would have it).

These examples show that, however abstract, multifaceted, and difficult to define in precise terms, there is a socially constructed concept [3] of “public interest.” Furthermore, social activity can be directed toward attempts to produce, promote, or optimize a particular facet of that concept, such as social justice, common knowledge, or collective well-being. The relative success of such activity can be determined according to a metric, or system of metrics, that might be used to measure the performance, level, or quality of that facet: for example, in social justice, the Gini index, relative poverty metrics, and inclusivity; in journalism, accountability and the democracy index; and in health, life expectancy and quality-of-life indices.
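One of the metrics named above, the Gini index, can be made concrete with a short sketch; the income figures below are invented purely for illustration:

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes:
    0.0 means perfect equality; values near 1.0 mean extreme inequality.
    Uses the standard closed form over sorted values."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Weighted sum i * x_i (1-indexed) over the sorted incomes.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([30, 30, 30, 30]))   # equal incomes -> 0.0
print(gini([0, 0, 0, 100]))     # one person holds everything -> 0.75
```

The point is not the arithmetic itself but that such a metric gives social activity a measurable target, for better or, as discussed below, for worse.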


This social activity may also require collective action [4], that is, the coordinated behavior of a sufficiently significant proportion of a group in order to achieve a desired outcome relative to the applicable metrics. To do this at scale, some form of knowledge alignment may be required [5], [6]: a group might define policies —which are themselves, of course, socially constructed—as a mechanism to realize coordinated behavior. As a consequence of such “application pull,” technologies are developed to help implement the policies that are intended to advance the public interest.

It could be argued that a common assumption underlying the collection of papers in this special issue is that some facet(s) of public interest can be specified as “supra-functional” requirements, distinct from functional and nonfunctional requirements. Systems can then be designed, for example, using methodologies like value-sensitive design [7]–[9], and operationalized [10], [11], to produce socio-technical solutions that support or complement policies addressing, for example, environmental sustainability, social justice, or public health. Such systems are then deployed to promote the public interest, or to enable users to act (individually and at scale) in the public interest, toward individual and communal empowerment. Such deployment needs to be sensitive, though, to a kind of “social quantum effect”: adding a technology to a social system (and observing and measuring it) changes the nature of the system itself, often with unintended consequences [12]. Software engineering methodologies for socio-technical systems, especially those using artificial intelligence, currently lack a deep understanding of the sociological, ethnographical, and anthropological science underlying public administration and public sector work [13], even though some technology developers do have an effective grasp of behavioral psychology [14], [15].

Citizen Assemblies and Performative Governance

Because “public interest” is a social construct that is difficult to define, the use of behavioral psychology in technological development exposes an exploitable weakness: the concept of public interest becomes amenable to manipulation in its “definition,” “actualization,” and “metrication.” But, as has been discussed, there are no absolute answers to who determines what the public interest is, who is to decide what is in the public interest, or even who is the gatekeeper or guardian of the public interest. Unsurprisingly, then, the term “public interest” has, like the terms “social capital” and “public school,” been misrepresented, somewhat hijacked, by private interests who repackage their own interests as the public interest itself. Consequently, some economists, sociologists, and political scientists use alternative terms, preferring, for example, to talk of “conceptual resources” rather than “social capital.”


As a result, and especially in countries led by populist leaders (whose electoral appeal may be based on polarization [16], [17] through false narratives, fictive scaremongering, and the relentless promulgation of “alternative facts”), not just technologies but also policies may be designed and intended to create distortion, diminution, distraction, or division of the “public interest.” This may be achieved either covertly, through the enactment of laws designed to entrench established power relationships or to advance oligarchic interests, or more overtly, through programs whose only intention is performative, that is, national gaslighting through policies intended to create headlines and stoke division, rather than to deal with difficult evidence-based choices or costly investment in solutions that might actually address a wicked problem. This is particularly evident in criminal justice, immigration, education, climate change, and, of course, public health.

Performative governance of this kind can also have a restrictive, even malicious, influence on technological development. For example, the recent U.K. Elections Bill [18] proposes a form of what might be called n-tuple entry bookkeeping. This is because the bill imposes financial constraints on any organizations involved in plans which “can reasonably be regarded as intended to achieve a common purpose,” since the total expenditure is to be counted toward the expenditure of each and every organization that is party to that common purpose. For a moment, leave aside the usual criticism of poorly drafted legislation, for example, the introduction of the ill-defined and purely subjective word “reasonable” into a legal context, and even leave aside the tendentious intent to disempower opposition and entrench the drafter’s own political authority [19]. More disconcerting is the implicit animosity toward collective action in general, and how this provides a façade for a shoulder-shrug approach to public policy: “we have to learn to live with” a pandemic, climate change, and so on. Most disconcerting is how this kind of legislation could have a chilling effect on the numerous socio-technical initiatives to advance collective action through online citizen assemblies [20]–[23]. If the use of such technology can be designated as achieving a “common purpose” and therefore be subject to financial restrictions, then many organizations will be compelled to rethink their involvement and engagement with such a potentially valuable public interest technology (PIT). Simply, this is not what we want.
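The “n-tuple entry bookkeeping” objection can be illustrated with a toy model (the organization names and spend figures below are hypothetical, not drawn from the bill): if the whole joint expenditure counts toward every participating organization’s limit, the aggregate regulated spend scales with the number of participants.

```python
def counted_expenditure(joint_spend, organizations):
    """Toy model of the rule criticized above: the full joint expenditure
    is counted toward EACH organization deemed to share a 'common purpose'."""
    return {org: joint_spend for org in organizations}

orgs = ["Org A", "Org B", "Org C", "Org D"]   # hypothetical participants
counted = counted_expenditure(10_000, orgs)

# One real 10,000-pound joint campaign is treated, for limit purposes,
# as 10,000 pounds against each of the four organizations:
# 40,000 pounds of regulated spend in aggregate.
print(sum(counted.values()))  # 40000
```

Under such a rule, each additional participant in a collective initiative inflates everyone’s regulated totals, which is precisely the disincentive to collective action described above.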

Needs of PIT

In summary, two claims have been advanced. First, that there is a social construct, in the form of an intersubjective agreement formed through interaction, called “the public interest.” Second, that there are actions, behaviors, and processes performed by individuals acting in a professional capacity, such as lawyers, journalists, teachers, doctors, and technologists, among other professionals, that are in, can promote, or can improve this “public interest.” Accepting these two claims, it is then pertinent to ask whether there are actions that can be pursued which are inimical or deleterious to the “public interest.” These could be, for example, actions that are harmful to collective action, like poisoning the well of common knowledge, or actions that are harmful to social justice, environmental sustainability, and public health and well-being.


There are two observations that follow from this. The first is professional: while numerous careers and occupations, as already noted, seek to promote the public interest, they also have formal associations specifying, reviewing, and accrediting standards of ethical behavior. Certainly, we need much better than shabby, shady performative governance by sado-populists [24] who abnegate responsibility and eschew accountability. The second is linguistic: although, as previously discussed, the phrasing “promote the public interest” or “to be in the public interest” is understood, the negation is not so readily available. Other than saying something is “not in the public interest” (so it might yet be in someone else’s interest), there is no expression to label behavior that is actively and deliberately inimical to the public interest. We can readily talk about doing something for, or in, the public interest; it is much harder to talk about doing something against the public interest.

The corollary of these observations—if we are to address existential threats and develop socio-technical initiatives in the public interest—is the requirement for a transdisciplinary scientific program. This program would serve, in the first instance, to develop better language: it is not just that language matters [25], but that social construction through repeated interaction is not—cannot be—a one-sided effort; it is a bidirectional one, requiring appropriate language, translation, and fluency [26]. The program would also seek to hold relevant professional groups accountable for their actions, regulations, and policies, specifically in cases resulting in sustained damage to and violation of the (collective, humanist definition of) public interest. We need socio-technical design endeavors that promote and encourage collective action, removing restrictive policies that are demoralizing, prohibitive, and ultimately counter-productive. The solution lies in empowering, rather than restricting, citizens: embracing and advancing the idea of citizen assemblies and deliberative sessions that allow challenges and issues to be identified and scrutinized collaboratively, ideally resulting in socio-technical and other recommendations that generate the desired outcome(s) and are representative of what really should be understood as “the public interest.”


Author Information

Jeremy Pitt is a Professor of Intelligent & Self-Organising Systems with the Department of Electrical & Electronic Engineering, Imperial College London, London, U.K.
Prof. Pitt is a Fellow of the British Computer Society (BCS) and the Institution of Engineering and Technology (IET), and a member of the IEEE. He is currently the Editor-in-Chief of IEEE Technology and Society Magazine.
Katina Michael is a Professor at Arizona State University, Tempe, AZ, USA, holding a joint appointment with the School for the Future of Innovation in Society and School of Computing and Augmented Intelligence. She is also the Director of the Society Policy Engineering Collective (SPEC).
Prof. Michael is the Founding Editor-in-Chief of the IEEE Transactions on Technology and Society. She is a Senior Member of the IEEE and a Public Interest Technology advocate who studies the social implications of technology. In 2020, she became the Founding Chair of the first Master of Science in Public Interest Technology degree in the world. In the same year, she also received the ICTO Golden Medal lifetime achievement award for exceptional contributions to research in information systems.
Roba Abbas received the Ph.D. degree in location-based services regulation.
She is a Lecturer and Academic Program Director with the Faculty of Business and Law at the University of Wollongong, Wollongong, NSW, Australia. She has received competitive grants for research addressing global challenges in areas related to co-design and socio-technical systems, operations management, robotics, social media, and other emerging technologies.
Dr. Abbas is a Co-Editor of the IEEE Transactions on Technology and Society.


To read the full version of this article, including references, click HERE.