AI vs “AI”: Synthetic Minds or Speech Acts

June 10, 2021

By Peter R. Lewis, Stephen Marsh, and Jeremy Pitt

 

The observation that the COVID-19 pandemic has disrupted workplace relationships and working practices is trite; it is nonetheless true. One significant change has been that the massive increase in call-center employment in the past 20 years has been mirrored during the pandemic by a corresponding increase in remote working or working-from-home. However, the call-center sector is, at least anecdotally, characterized by pressurized, target-driven completion of routine tasks often performed by temporary, over-qualified personnel with little personal control, investment, and engagement. Consequently, manager–employee relationships are often strained and, in particular, mistrustful.

Taking their cue, perhaps, from current U.K. government ministers who asserted, with scant evidence, that British workers “are among the worst idlers in the world,”1 one of the world’s biggest call center providers has proposed to address this unstable and asymmetric relationship by monitoring staff performance through webcams on home computers. The cameras would then feed into an “artificial intelligence system” that would scan, supposedly at random, to determine if an employee was violating working rules during a shift [1].


About such surveillance: well, put it this way, the idea does not appeal. However, it is the use of the phrase “artificial intelligence” that particularly concerns us here. To be frank, the way it is presented to the workers is an abuse of both language and concept, in the pursuit of abusing an asymmetric (workplace) power relationship of a kind that persists in the AI sector too [2].

To see what we mean by abuse of language and abuse of concept, we need to distinguish between artificial intelligence (AI) as a technology with practical application, and “artificial intelligence” (“AI”) as a speech act with conventional force.

As a technology, AI exists somewhere on a spectrum from, practically, at one end, expert systems, path planners, and practical reasoning systems (i.e., algorithms that can “do” relatively simple things that some naturally intelligent entities might be expected to “do”) through to, theoretically, at the other end, Alan Turing’s “imaginable digital computers which would do well in the imitation game” [3] or John Haugeland’s synthetic intelligence [4] (i.e., machine intelligence that is constructed but not necessarily imitative (compare an octopus’s distributed brain); hereafter, we will refer to such intelligence as synthetic minds).

Alternatively, “AI” as a speech act is a social construct that stems largely from science fiction, with computers and robots having hugely overblown capabilities and a tendency to the apocalyptic. The problem in the call center scenario is that a relatively simple technology at the lower end of the AI spectrum is being packaged as something at the upper end of the AI spectrum, by willfully leveraging preconceptions and misconceptions of AI competence through the speech act of “AI.”

Therefore, people have been, and are being, “encouraged” to think about artificial intelligence wrongly [5]. The results of thinking about it wrong are all around us, and not much of it is “good.” Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility. And there are tech giants and others who are happily taking advantage of the gaps that this creates.


And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust and if they don’t it’s their fault, so how can we make them do that, right? Or, we’ve built this amazing “AI” system that can drive your car for you but don’t blame us when it crashes because you should have been paying attention. Or, we built it, sure, but then it learned stuff and it’s not under our control anymore—the world is a complex place.

The problem of “trustworthy AI” is one that has a great many different “sides.” On the one hand, there are guidelines (for example, from the EU) that tell us how AI should be built and/or behave in order to be seen as “trustworthy”—presumably this means that people are going to (should? must?) trust it. On the other hand, the problem is seen as “We shouldn’t have to trust AI” because it is a “made thing” and, since it is a human artifact, humans should be held responsible (accountable) when it does something wrong. As we’ve already noted, in many cases, when they are using marketing speak, those who claim “AI” can be seen as “trustworthy” also claim that it is “beyond the control” of its creators when it leaves the shop floor. Usually, it’s the creators who claim this, by the way. This is an unacceptable attempt to evade responsibility by people who would rather not be held accountable. There are many, but the usual suspects would seem to be prevalent—we can call them big tech, for the sake of argument, but they need not be big or indeed tech. As Mayor [6] notes, “artificial devices” have always been used by those who would seek power over others (and of course, to the detriment of their creators, in the end).


It’s not just an evasion of responsibility; it is an exercise in power and it is profoundly wrong.

There’s another hand (sure, we have two, but we can borrow one). There are also those who believe that there is a need to consider how we treat AI, and to think about it, because doing so holds a mirror up to our own humanity. Perhaps even more importantly, it prepares us for a time when we may well need to worry about how we used to treat it. To our shame, we have plenty of examples through human history of how we treated those who looked different from us. These have never been shining examples of the goodness of human nature. Unlike the powerful position Columbus, Drake, and others were in during the “discovery” of new lands and peoples, we may find ourselves in a somewhat less powerful position with respect to a synthetic mind, should we one day all be unequivocally satisfied that one has appeared.

That reference to “synthetic mind” was deliberately chosen.

It is important to be clear here. Seeing a future where such a thing may come to pass is emphatically not the same as seeing a present where those who create “trustworthy AI” should not be held responsible for its actions. Of course, they should be.

However, unfortunately, it’s also not quite that simple. There can be a debate here about whether we could or would ever produce something sufficiently holistically mind-like to satisfy even the strongest critics of AI. It is possible that we don’t have enough time left to find out. It is also rather easy to point the “idealist” finger at those who (choose to) believe that we may. And there is, as alluded to earlier, a substantial difference between AI, in the form of Turing’s “imaginable digital computer which would do well in the imitation game,” and AI as a basic problem solver or classifier implemented at a hackathon.

But there is a sense of a moving target here. In one conception of AI, Rich and Knight [7] define it as “the study of how to make computers perform things that, at the moment, people do better.” This definition serves as a metaphorical carrot or stick, encouraging us to make computers that can do ever more mind-like things. It also goes some way to explaining why, once we’ve figured out how to do something mind-like perfectly well with a machine, and it is so common as to no longer be surprising, we are tempted to start claiming that that machine is “not really AI.” As an aside, it does not seem at all obvious to us that a hypothetical future (synthetic or alien) mind would be in principle unable to bear responsibility simply by virtue of its having been synthesized rather than evolved (what would the defining characteristic be?). In considering the task of trying to create a synthetic machine coextensive with the human mind, it is clear that in many respects we are very far away from this: we do not even have all the right questions to ask yet. In many other respects, however, we have already wildly surpassed it.

Yet, philosophically interesting and morally essential as this discussion is, it can also be used as a weapon of mass distraction. To return to and ram home the distinction identified earlier: There is AI as synthetic mind, the largely as-yet-unrealized potential to create mind-like machines, and there is “AI” as marketing, as a speech act, used to evoke a perception of some technology. There is a manifest difference between what AI is marketed as and what AI is, could be, and perhaps should be. The problem with the current debate around “trustworthy AI” is that it conflates “AI” as marketing with AI as (usually a potential future) synthetic mind. This is disingenuous or misinformed. Possibly it is both. Take your pick.

Let us be more concrete: In the article referred to above [1], the word (acronym) “AI” was used more than once, to describe a system that would watch individuals through a camera and alert supervisors if, for example, they were using a phone or away from their desk. Perhaps the most important observation here is that, whilst there is a reference to “AI,” what is actually being done is the ascription of intelligence to a system that uses classification techniques to deal with a large amount of video data (and quite possibly things like keyloggers). In other words, from the computer science point of view: signal processing, pattern recognition, classification, and rule checking. “Intelligence” is a little too extreme a word to ascribe to these activities. Indeed, if this monitoring task were given to a human, we might describe it as rather “mindless.” This is not to say that there isn’t something mind-like happening, but words have consequences. The way they are used has power.
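To make this concrete, here is a deliberately simplified, hypothetical sketch (in Python; the class, field names, and thresholds are ours, not the vendor’s) of roughly what such a monitoring system amounts to once a classifier has labeled each frame:

# Hypothetical sketch: what a webcam "AI" monitor largely reduces to.
# A pre-trained classifier labels video frames; plain if/then rules
# turn those labels into alerts. Nothing here warrants the word "mind".

from dataclasses import dataclass

@dataclass
class FrameLabels:
    person_present: bool   # output of a person detector
    phone_in_hand: bool    # output of an object classifier
    confidence: float      # classifier confidence, 0..1

def check_rules(labels: FrameLabels, away_seconds: int) -> list:
    """Rule checking on classifier outputs: signal processing and
    pattern recognition upstream, if/then statements downstream."""
    alerts = []
    if labels.confidence < 0.5:
        return alerts  # too uncertain to act on (in principle, at least)
    if not labels.person_present and away_seconds > 300:
        alerts.append("employee away from desk")
    if labels.phone_in_hand:
        alerts.append("employee using phone")
    return alerts

# One "random scan" during a shift.
print(check_rules(FrameLabels(person_present=True, phone_in_hand=True,
                              confidence=0.9), away_seconds=0))

The point of the sketch is not verisimilitude; it is that everything of consequence here is a human design decision: the rules, the thresholds, and what counts as a violation.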

Ascribing intelligence to such tools reinforces the disempowerment and devaluation of the human, based as it is on inflated expectations of the machine. It also reinforces the asymmetric power balance between bureaucracy (which inherently makes rules and expects them to be followed) and the humanity (or intelligence) shown by the “workers.” As David Graeber might have noted, this is the epitome of the Utopia of Rules combined with Bullshit Jobs [8], [9].

The term “AI” is used by those who would deploy it, here and elsewhere, in order to reinforce the preconceived ideas that we have as humans and skew the resultant experience away from human and toward rule. It is a power speech act. It is clear, on reflection, that the usual suspects referred to above already know that this is the case. The way in which “AI” is used today is proof enough of that.

If this is the case, and “AI” (as marketing) is a speech act intended to create or reinforce an asymmetry of power, at which point should it be trusted? (This was sarcasm.) Indeed, as a speech act, “AI” is no more than a proxy for the people who deploy it at the expense of those who are subjected to it. It serves very well as an exercise in deflection and distraction: having us “trust” the “AI” and see it as different and separate suits the people who deploy it because it puts them at least one remove from accountability.

In a similar example, Walmart in the U.S. and ASDA in the U.K. have been rolling out what some have dubbed “AI janitors” to replace cleaning staff in their stores.2 This may be a perfectly sensible application of robotics, but again the use of “AI” for such a role, here largely by the media in reporting it, is curious. We might venture to suggest that firms hiring cleaners do not have “intelligence” high on the list of requirements for potential staff, though ironically they may have “trustworthiness.” Thankfully, Walmart’s VP was more restrained, announcing simply that “BrainOS is a powerful tool in helping our associates complete repetitive tasks so they can focus on other tasks within role and spend more time serving customers.”

We suggest that it would be hard to argue that there is nothing intentionally evocative about software labeled “BrainOS.”

All of our vaunted “AI” technology is, at least plausibly, artificially intelligent, in that it embodies in some way “computers that do the sorts of things that minds can do” [10, p. 1]. But words have consequences, and the use of “AI” as a speech act can be dangerous. To be clear, this is not about the rightness or wrongness of people trusting AI or some other automaton since they almost certainly will anyway, just as they do with all manner of artifacts, animals, and various others, regardless of whether others think they ought to. Is our response to this conundrum to tell them that they are wrong?

We conjecture that it is the talk of trusting “AI” (as speech act) that is the real problem, because it removes accountability. Regardless of any promise for the future, we need to remember that we as creators, sellers, and marketers are in control of “AI,” the marketing tool. Moreover, we have a choice about whether to market a product in this way, and about when to use, or refrain from using, the term in order to bring about a particular perception of a machine by a human being who is usually the less powerful party in the relationship.

What it comes down to is not so much trustworthy AI as democratic (or democratizing) AI. It is both possible and important to make “AI,” the power speech act, democratic. And democratic AI is possible too: it is transparent about its strengths, weaknesses, and creators. It provides a way for everyone to know what it actually is, why it is being used here and now, when it is supposed to be used, where it makes sense to use it, and how it actually does what it does.

As computer scientists, we can teach people this. We actually know this stuff.

Let us revisit Turing’s “imaginable digital computer which would do well in the imitation game.” “AI” as speech act is not an imaginable digital computer in Turing’s sense: instead it signifies a technology that neither knows what it is doing, nor is concerned with what (or whom) it is doing it to or for (the data and the people), and moreover it simply doesn’t care.

It is at this point that the trust dynamics change considerably: It is not reasonable to argue that one ought to trust a simple rule-following machine as if it were a more advanced form of intelligence. And, while some may do so anyway from a position of ignorance (not everyone can, will, or should become an AI expert), encouraging them to do so is a form of exploitation. But there should be a recognition that “AI” is being (ab)used to deflect accountability and responsibility. It is an idea, or perhaps a meme, planted in the psyche of the humans subjected to it to make them think that the machine is something “special” or somehow “insightful” when all it is, is a set of data processing and pattern recognition techniques, occasionally mixed with some advanced statistics.

In a 2018 article [11], Bryson argues that no one should trust AI: “AI is a set of system development techniques that allow machines to compute actions or knowledge from a set of data. Only other software development techniques can be peers with AI, and since these do not ‘trust,’ no one actually can trust AI.” This conflates the definition of “AI” as speech act with today’s AI technology. It is not the definition of AI as (imaginable) synthetic mind. As we have already said, conflating the two is problematic. As Bryson notes, what she refers to as AI is nothing more than knowledge from data. However, there is power here: to be able to extract knowledge from data is not a simple task, but it is straightforward enough.

If Bryson conflates “AI” as power speech act and AI as a synthetic mind, she is not alone. A cursory look into the literature will show enough of this. Even that is not necessary, since popular culture has done a fine job of elevating marketing “wool over the eyes” speak to a perfectly misunderstood and misused trope. It has also done a fine job of making us fear the consequences of AI as synthetic mind, but we’ll leave that aside for now, except to point out that this “Frankenstein” complex has also been leveraged by those who would deliberately misinterpret the distinction we are making here. That, combined with the misconception about “AI” and AI, results in another power imbalance and the subsequent silencing of the voices of those who would ask difficult questions about how we behave toward others.

What makes this more insidious is the fact that the creators (or owners) of “AI” as a power speech act are almost certainly aware of the distinction we are making here. The result is that the use of “AI” can misdirect our “understanding” of what AI could be and impose it on what AI (or algorithm) is. If this is done, then it becomes entirely possible to abdicate responsibility when things don’t work the way we understand that they probably should. And this, of course, is where Bryson is right on the mark.

The wording in the previous paragraph was careful. The “AI” or “algorithm” (that speech act) is almost certainly working the way it is intended to: it is deceptive, carelessly racist, usually misogynistic, and does exactly what its creators made it do. That they can then claim that it was “AI gone awry” is a travesty of justice.

Consider, for example, the debacle around school exams in the U.K. in 2020—where of course an algorithm was used to “grant” “appropriate” grades to the A-level students in their final year of high school. While the outcry was around “the algorithm got it wrong,” it is almost certain that it didn’t: It was doing exactly what it was told to do with the data it had. People are responsible for this situation. People specified the algorithm and its behavior. And yet, the algorithm or the machine is blamed. In this situation, there is no accountability because, as Bryson correctly maintains, the people who made it happen can deftly divert responsibility to the machine. The result is that the people who brought this situation upon the students of the U.K. remain in their positions, the programmers get paid, and everyone blames the “AI.”

Real life should not work this way. In real life, we should learn from the mistakes we make and be held accountable, not be doomed to repeat them because we can divert attention away from us. Accountability allows us to “fail forward”—and improve on our behaviors and misconceptions so that the mistake isn’t repeated. Interestingly, whilst the situation has not been repeated in exactly the same way in the U.K. in 2021, responsibility is once again shifted from the people who should be responsible to others in a system—in this case teachers—so that if things go wrong again it’s their fault and not that of the people who directed them in the first place. Algorithms need not be followed just by computers.

And if we are correct, where does this leave us? With an extremely fragile but carefully constructed house of cards. Exercises like, for instance, MIT’s Moral Machine examination of the “Trolley Problem” are exposed as little more than tricks with words, used to blind us to the actuality that the machines we believe are making decisions about life or death are nothing less than frauds: the “decisions” they make are the responsibility of the people who made them, nothing more or less.

Winfield et al. [12] note that one way of viewing trolley problems and their ilk is as providing a set of subjective moral judgments which the ethicist uses as data to formulate general principles. Suppose most people choose to flip the switch to save five people instead of one, so the naive ethicist formulates the principle of “the greater good of the greater number.” But then what happens when the problem is reformulated slightly: consider, for example, five people who will each die without an organ transplant, and one healthy person. The application of that principle would result in killing the one to save the five, but we find that this conflicts with the majority choice. This highlights a more informative way of viewing trolley problems: they expose the flaws in formulating (and worse, asserting) universal laws from limited datasets, demonstrating that abstract philosophical conundrums can illuminate, but do not necessarily eliminate, the difficulties of moral or ethical decision making (leaving aside the problem that, as Aristotle observed, the majority might be wrong anyway [13]).

To put it another way, naive principles sound good (and are good soundbites), but there is something more subtle going on, even in real honest-to-goodness people. Our problems start when we want to let a machine (or the machine’s designers) derive ethical principles from just the data from a single (or even multiple) scenarios, absent real understanding.

As one example of why the Trolley Problem is insufficient as a way to think about how to make policy for autonomous cars, consider the possible consequences of implementing policy based on its outcomes. Suppose we arrived at the conclusion that a car ought to avoid driving into young children at all costs, even if that meant killing a middle-aged person. That is certainly one reasonable-sounding policy. Now suppose that we implemented this rule into every car on the road. And further consider what happens when said children realize this fact. We can now conjure up the mental image of gangs of children running amok in car parks or streets with impunity, causing cars to swerve dangerously and increasing the risk of them crashing into each other or other people. This is perhaps fun for the kids, but is clearly not helpful. Consider instead that today we have “rules of the road,” zoning, and norms about crossing roads and behaving predictably, driven in part by an appreciation of the dangers associated with motor cars (autonomous or otherwise).

But it gets worse if one were to formulate policy on the machine’s derived principles. This presents a risk similar to what Ostrom [14] observed about collective action situations involving the provision and appropriation of common pool resources (i.e., how to avoid the supposedly inevitable tragedy of the commons). Someone sets up an experiment (for example, using a Prisoner’s Dilemma) in the lab, observes that people are selfish, claims that people are innately selfish because it suits some neoliberal Randian world-view, and then formulates policy, organizes society, bases an economic model, and aligns the educational curriculum on the assumption that people are selfish. But all the experiment demonstrates is that if we set up a situation where selfishness is the rational choice, we will see selfishness. What Ostrom observed is that people (or communities of people) can make up political meta-games to ensure that cooperation becomes the rational choice; and, no surprise, we will see cooperation.
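To illustrate Ostrom’s point with a toy example (the payoff numbers and the sanction mechanism below are our own illustrative choices, not taken from her fieldwork): in the bare Prisoner’s Dilemma, defection is the rational choice by construction, so observing defection tells us about the game, not about human nature; add a community-imposed meta-rule, such as a sanction on defectors, and cooperation becomes the rational choice instead.

# Toy illustration: the "selfishness" observed in a Prisoner's Dilemma
# is built into the payoffs; change the (meta-)game and it disappears.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move, sanction=0.0):
    """My payoff-maximizing move; 'sanction' is a community-imposed
    penalty on defection, the kind of meta-rule Ostrom documented."""
    def payoff(my_move):
        p = PAYOFFS[(my_move, their_move)]
        return p - sanction if my_move == "D" else p
    return max(("C", "D"), key=payoff)

print(best_response("C"))                # 'D': defection dominates in the bare game
print(best_response("C", sanction=3.0))  # 'C': with a sanction, cooperating is rational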

Bryson defines trust as “a relationship between peers in which the trusting party, while not knowing for certain what the trusted party will do, believes any promises being made.” This is a carefully chosen and worded definition of trust which cleverly suggests that it only operates amongst peers, and only on promises. However, nothing could be further from the truth. There is little doubt that humans are not the peers of their animal companions, and yet trust exists between a human and, for instance, a service dog—even to the extent that the dog is trusted by the human not to lead them onto a road where there is an oncoming car (we would venture to suggest that such behavior is pretty trusting). Moreover, the relationship is notably bidirectional. But there is more, since no promises were ever made by either party—if there were, such promises would not be understood in any case, so the question of whether any were made at all is moot. The definition is not just problematic because it glosses over and cherry-picks the situations it wants. It is problematic because it basically implies human exceptionalism. This is not a simple problem, and the definition refuses to accept the possibility that non-human trusters and trustees could exist. We do not subscribe to this point of view.

Instead, let us return to a definition of trust that is more widely accepted and more inclusive. We are used to using Gambetta’s definition, which reads: “trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action” [15, p. 217]. This generally suffices: one can define “agent” in many ways, for instance. But if this definition is too restrictive, or perhaps not restrictive enough, depending on who is looking at it, we refer the interested reader to [16, ch. 3] for a more thorough examination of the phenomenon from different points of view.

A more acceptable definition of trust allows a more inclusive acceptance of who or what can use it. It does not preclude animals, for instance. Neither does it exclude machines. This is important. In the 1990s, Reeves and Nass carried out research into how people (humans) perceive technology (media) [17]. The experiments had to do with the way in which people interacted with a computer. They showed that humans tended to see technology as a social actor and behaved in ways that would be socially appropriate toward the technology. This was despite being told that they were interacting with a computer.

Beyond defining what trust is, it might be even more insightful to ask what trust does. Apart from short-cutting the complexity of decision-making in n-player, m-action games (NP-complete, i.e., there is no known way to compute a solution in polynomial time), trust is also a process involving the social construction of a conceptual resource (trustworthiness) that enables a community to solve collective action problems through common knowledge.
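For a rough sense of the scale being short-cut (our own back-of-the-envelope arithmetic, not a figure from the trust literature): with n players each choosing among m actions, there are m to the power n joint outcomes to reason about, which grows very quickly.

# Back-of-the-envelope: the joint action space of an n-player,
# m-action game grows as m**n, quickly dwarfing what any agent can
# deliberate over exhaustively; trust short-cuts this explosion.
for n, m in [(2, 2), (5, 3), (10, 5)]:
    print(f"{n} players, {m} actions each: {m**n:,} joint outcomes")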

We would in fact argue that there are two aspects of trustworthiness: one is the individual innate one, that if you believe there is a rule, and that others’ expectations are that you will follow the rule, then you will follow the rule. This is what makes, or should make, someone trustworthy; but the socially constructed aspect of trustworthiness is the perception of others as to whether or not this is the case. The innate trustworthiness and the socially constructed trustworthiness of an individual might differ.

However, this process is vulnerable. As Edward Bernays wrote in Propaganda, and it is worth quoting at length [18, p. 9]:

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in a democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country….We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society…. In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons…who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.

In short, it actually doesn’t matter if we as academics, in whichever ivory towers we inhabit, tell people that they shouldn’t see computers that way. In fact, this could almost certainly be extended to other machines—cars, for example. Automation bias will see to it that they will tend to see things like this anyway, but it will be reinforced by this conscious and intelligent manipulation of their opinions. It almost certainly doesn’t matter if we tell people not to trust technology; they already do. This is part of the problem. Because “AI” as speech act is used, that trust is misdirected and abused, with obviously negative consequences. This isn’t about trustworthy “AI”; it’s about untrustworthy people.

At the end of Artificial Unintelligence, Broussard [19] says that “Turning real life into math is a marvellous magic trick, but too often the inconveniently human part of the question gets pushed to the side. Humans are not now, nor have they ever been, inconvenient. Humans are the point. Humans are the beings all this technology is supposed to serve.” We cannot but agree, and we would add this: humans are also the ones tricking the rest of us into believing otherwise. However, we would add a cautionary note: this is true for “AI” as speech act. It need not be true for AI as synthetic mind. Maybe we will find out.

Where is this taking us? We observe that trustworthy “AI” is used as a tool to deflect responsibility, or even to be a proxy for trustworthy developers and engineers. The result is based on abnegation of responsibility and willful ignorance. We should be doing better. For one thing, it is important to note the distinction between “AI” (as power speech act or marketing speak), which we see around us extensively, and AI (or synthetic minds), which we could perhaps have already constructed either on purpose or by accident, but which we should be careful to acknowledge as “different.” This last point is important: AI as a synthetic mind is not a binary “thing”; just as there are different kinds of intelligence, there are almost certainly different kinds (and levels) of synthetic minds. Some of these seem futuristic, while others, we contend, are already here or may be “out there.” And if we cannot ask ourselves how we should be treating that which is not “us,” we deserve everything we have coming.

We suggest that a democratization of both “AI” and AI is necessary in order to better inform the people who are affected by this deceit. It is not satisfactory to blame the computer—indeed it never has been, yet since we’ve had them, we’ve tried to do exactly that. What is needed is the means to explain:

What the system is doing;

Why it does what it does;

How it does this thing;

Why it does it this way;

In ways that the people affected by it understand. This should not be the responsibility of the machine, since we do not (yet) have AI capable of bearing responsibility for its behavior and operation. Let’s consider a more pressing question instead: who is responsible for “AI”?
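As a purely illustrative sketch (the field names and example values are ours, not a proposed standard), such an explanation could travel with a deployed system as a plain, human-readable record, written for the people it affects rather than for its developers:

# Hypothetical sketch of a plain-language disclosure record that could
# accompany a deployed "AI" system; the fields are illustrative only.

disclosure = {
    "what_it_does": "Flags video frames where a person or a phone is detected.",
    "why_it_is_used": "Employer policy on monitoring during shifts.",
    "how_it_works": "A pre-trained image classifier plus fixed if/then rules.",
    "why_this_way": "Cheaper than human review; tuned to minimize missed flags.",
    "known_weaknesses": ["misclassifies in poor lighting",
                         "cannot understand context or intent"],
    "responsible_people": ["the vendor", "the deploying employer"],
}

for field, value in disclosure.items():
    print(f"{field}: {value}")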

This actually should not be too much to ask.

Author Information

Peter R. Lewis is an Associate Professor of Artificial Intelligence with the Faculty of Business and IT, Ontario Tech University, Oshawa, ON, Canada. He is an Associate Editor of IEEE Technology and Society Magazine, an elected Steering Committee Member of the IEEE International Conference on Autonomic Computing and Self-Organizing Systems, and a Member of the IEEE.

Stephen Marsh is an Associate Professor of Trust Systems with the Faculty of Business and IT, Ontario Tech University, Oshawa, ON, Canada. His research interests include computational trust, regret and forgiveness, wisdom, artificial intelligence, privacy, and human-computer interaction. He is an Associate Editor of IEEE Technology and Society Magazine.

Jeremy Pitt is a Professor of Intelligent & Self-Organising Systems with the Department of Electrical & Electronic Engineering, Imperial College London, London, U.K. Prof. Pitt is a Fellow of the British Computer Society (BCS) and the Institution of Engineering and Technology (IET), and a Member of the IEEE. He is currently the Editor-in-Chief of IEEE Technology and Society Magazine.
