Book Review: The AI Mirror

December 2, 2025

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, by Shannon Vallor (New York, NY, USA: Oxford University Press, 2024)

Reviewed by Luke Fernandez

In The AI Mirror, Shannon Vallor uses the mirror as a metaphor for unpacking what AI is. Vallor, an acclaimed philosopher of technology, argues that AI, like a mirror, is a device of reflection; but where a mirror typically just reflects the face that stares into it, “AI systems mirror our own intelligence back to us.” Like mirrors, AI does not reflect perfectly or benignly. It distorts what it reflects. It indulges our vanities. Its reflections, which seem at first glance to have depth, are superficial. Vallor argues her case with erudition, deftly incorporating insights from a wide range of thinkers, including ancients like Plato and Aristotle as well as modern and contemporary thinkers like Joseph Weizenbaum, Safiya Noble, Timnit Gebru, and Abeba Birhane.

Of all the thinkers Vallor cites, and often also tussles with (Aristotle and Plato especially), she seems most inspired by Joseph Weizenbaum, who not only invented the first chatbot in 1966 but was also an inveterate critic of what he called “the computer metaphor,” the tendency to think of computers as actual brains and of brains as computers. Weizenbaum acknowledged that the computer metaphor was a “powerful” way to “understand many aspects of the world,” but it was not sufficient. As he put it, a metaphor “enslaves the mind that has no other metaphors to call on” [1]. If other metaphors were needed, what were they? Weizenbaum never offered an extended answer to this question, but Vallor, to her enormous credit, does. The advantage of the mirror metaphor over the computer metaphor may at first glance seem trivial, but, as one reads further into Vallor’s book, the advantages prove profound, because the mirror metaphor reveals problematic behaviors that the computer metaphor conceals.

First, as Vallor explains, AI, like a mirror, does not always offer precise representations of the thing it is reflecting. Often, the mirror contains minute imperfections or even large distortions like those in a carnival fun house. Likewise, as many critics have noted, AI can produce biased and distorted representations of the world, whether in AI search, in the COMPAS algorithm (which predicts the likelihood that a criminal defendant will commit another crime), or in facial recognition, to take some of the more classic examples [2].

Second, AI, like a mirror, can encourage narcissistic tendencies. Vallor illustrates this by recounting the myth of Narcissus who was so captivated by his reflection in a body of water that he sundered his relationship with others—even with the nymph Echo who tried to flirt with him. AI, at least in some contexts, might trigger similar asocial proclivities when people use Replika to cultivate relationships with AI boyfriends and girlfriends instead of with real flesh-and-blood people.

Beyond these obvious forms of asociality, Vallor warns that the narcissism bred by mirrors is tied to other dangers. Narcissus was so captivated by his image that he remained literally “fixated, confined, and immobilized.” This points to the third way in which an AI, like a mirror, can be a liability: instead of facilitating change, it freezes us in place. The mirror metaphor hints at this danger because mirrors, or at least the mirrors that are used in vanities, reflect backward at us rather than forward. Vallor claims that something similar happens with AI:

“AI tools… [are]… mirrors pointing backwards, narrowly reflecting only the dominant statistical patterns in our data history. Such mirrors, when used not as reflections of the past but as windows into our future, serve as straitjackets on our moral, intellectual, cultural and political imagination.”

In this sense, AIs “are profoundly conservative seers.” Vallor thinks that this conservatism might not be sufficient to redress the daunting new challenges humanity faces, including “accelerating climate change, biodiversity collapse, and global political instability.” To face these challenges, we may need “moral” and “political” virtues that AI, at least in its present incarnation, reflects dimly if at all. Citing the existentialist philosopher Ortega y Gasset, Vallor argues that, unlike other species, we are animals that can engage in “autofabrication.” We can make ourselves anew. We can, as Ray Kurzweil once put it, “go beyond our limitations” [6]. However, to do so, we need an imagination that is not constrained by the conservatism of AI.

Philosophers often remind us that their business is not so much to proffer the right answers as to pose the right questions. The computer metaphor does this to some extent, but the mirror metaphor does it better. The computer metaphor constantly impels the AI industry to ask whether we have reached AGI yet, or whether computers can actually think, or whether humans are merely “meat machines,” or whether AI is aligned with human values. These questions, while interesting, are not the only ones that need to be asked about AI. Here, the mirror metaphor can help. Rather than asking “Have we achieved AGI yet?” the mirror metaphor asks “Are AI’s reflections biased?” Rather than asking “Is the brain a computer and a computer a brain?” it asks “Do AI’s reflections (and our reflections on AI) make us more (or less) narcissistic?” Rather than asking “Are AIs aligned with human values?” it asks: “Do AIs, insofar as they only reflect backward, impede us from developing the moral skills we need to steer through the crisis we presently face? And do they inculcate the virtues to address our current predicaments?”

In this manner, Vallor’s mirror metaphor overcomes many of the limitations that Weizenbaum found in the computer metaphor. The computer metaphor fuels inquiries that are interesting to cognitive science—mainly how closely the computer models an individual brain. In contrast, the mirror metaphor sparks questions that engage scholars who are interested in the relationship between technology and society. Vallor is to be commended for refining and enlarging on a metaphor that kindles these latter sorts of questions.

Beyond framing these questions, Vallor also tries to provide answers to them. Here, her successes are more mixed. She is most convincing with respect to the question of bias. Although technology is sometimes conceived as a neutral tool, Vallor sensibly maintains that “technologies always embed the human values that shape our design choices.” Here, she is in agreement with the vast majority of scholars who think about the relationship between technology and society, from past ones like Marx, Engels, and Mumford to living ones like Langdon Winner, Jenny Davis, and Matteo Pasquinelli [7], [8], [9], [10], [11]. A technology can be, as Vallor says, “polypotent,” meaning that it can be put to good or bad uses, but a technology also comes with particular affordances that encourage some uses while discouraging others, whether that technology is an automated loom, Babbage’s computer, a highway overpass, or AI. Those uses, in turn, usually advance the dominant interests of society. Thus, on this point, Vallor is convincing, aligning with a storied group of tech scholars past and present who maintain that technology reflects society, but it reflects most strongly the interests and perspectives of the advantaged and powerful.

In persuading readers that the AI mirror fuels narcissism, Vallor is also successful—albeit with a few more caveats. When AI functions like a traditional mirror, it certainly is capable of encouraging narcissism, much as Narcissus’s reflection did. Indeed, when mirrors and photographic portraiture proliferated in the 19th century, they spurred vanity, self-reflection, and the development of the modern individuated self [12]. (Personal plug: in the book Bored, Lonely, Angry, Stupid: Changing Feelings About Technology, from the Telegraph to Twitter, my co-author and I trace this historical development in detail.) As Vallor notes elsewhere, chatbots, in their default mode at least, stoke narcissism, for they often apologize and capitulate when, in response to their output, we insist they are wrong [13]. They do not challenge or resist our convictions as much as fellow humans do. In the absence of this friction and resistance to our own views of the world, narcissism grows relatively unchecked.

However, while these examples highlight the narcissistic affordances of AI, one should also consider the caveats. Unlike vanity mirrors, which are conventionally used to reflect individual selves, AI LLMs offer up much broader reflections of society. Their reflections are far more capacious than those of the hand mirror or the vanity mirror and, as such, create opportunities for users not only to bask in their own reflections but to inquire about the world outside their own heads and outside their direct field of vision. This qualification has its analog in similar criticisms of the Internet. At least since Eli Pariser published The Filter Bubble, we have been aware that surfing the Web can exacerbate filter bubbles and confirmation bias. However, there is some countervailing evidence. In “Avoiding the Echo Chamber About Echo Chambers: Why Selective Exposure to Like-Minded Political News Is Less Prevalent Than You Think,” Guess et al. [14] suggest that some people use the Internet to develop more ecumenical and catholic perspectives.

This knottier conception of how AI reflects also complicates Vallor’s claim that AI fails to inculcate the virtues we need to redress the impending crisis the world faces today. As Vallor also maintains on The Artificiality: Minds Meeting Machines podcast, the challenges that we currently face are “large-scale coordination problems” that require “coordinating with others” and, therefore, demand the cultivation of “collective virtue[s].” In this sense, Vallor is deeply communitarian (I would say even Aristotelian if it were not for the fact that she criticizes Aristotle for being blind to the political virtues in technical work). She sees an acute need for present-day humans to develop their political and moral capacities and to coordinate them with broader publics. Because she thinks that AI breeds narcissistic tendencies, she has reservations about its ability to develop these more communitarian virtues.

She has a point here. If we outsource decision-making to AI and spend all our time chatting with chatbots rather than with fellow humans, we are unlikely to develop the civic skills that we need for coordinated collective action. Put another way, although there are virtues in Emersonian self-reliance and in thinking for oneself and by oneself, ultimately those thoughts need to be expressed in public, for it is only in this public exercise that we encounter other people’s resistance and reaction. We need that feedback in order to refine our private thoughts and make them useful for more coordinated action.

Vallor, of course, is not the first to worry about civic deskilling or the way that deskilling is fueled by technology and technical cultures. Robert Putnam in Bowling Alone and Philip Slater in The Pursuit of Loneliness worried that Americans were losing their civic inclinations largely as a result of suburbanization and a desire for privacy. Joseph Weizenbaum thought that computing culture was plagued by “instrumental reason”—a form of reason that refuses to consider moral questions altogether. Arendt worried that scientism and technocracy were turning political problems into technical problems—thereby reducing people’s ability to exercise political judgment [15], [16], [17], [18]. On the other hand, Matthew Crawford in Shop Class as Soulcraft and Richard Sennett in The Craftsman argue that technical work can catalyze moral growth [19], [20]. Political scientists like Archon Fung and Joshua Cohen highlight the way the Internet has opened up opportunities for political expression by marginalized groups that did not exist in earlier eras [21]. One important takeaway from all of this work is that there is no simple answer as to whether technology and technical culture enrich or impoverish moral and political life. It depends on the technology and on the particular form of morality or politics that one is talking about.

Nor is there a simple zero-sum relationship between individualism and communitarian ways of living. Sometimes, time spent alone (chatting with an AI) can enrich time spent with others. What role AI will play in shaping humanity’s varying commitments to individualism and communitarianism is an important question to ask, and it is posed in an interesting way when one thinks of AI as a mirror that reflects our (past) virtues. However, the story of AI is still unfolding. As the technology scholar Lee Vinsel has observed (Vallor compliments him in her book), we are currently in an AI bubble, which is a “bad information environment” for assessing what AI’s long-term impacts on politics, economics, and virtue will ultimately be [22]. Vallor helps us ask the right questions, but only time will yield conclusive and balanced answers to those questions.

Beyond supplying readers with a metaphor for understanding AI, Vallor wants to “revive our moral capacities” and our desire to exercise them. She is especially interested in this project because she thinks that we are losing the courage to judge. In this important regard, Vallor shares a deep kinship with Joseph Weizenbaum, who was also interested in bolstering this courage. These affinities are highlighted when Vallor, who quotes Weizenbaum many times throughout her book, recites this passage: “… just when in the deepest sense man has ceased to believe in—let alone to trust—his own autonomy, he has begun to rely on autonomous machines.” In the age of AI—the age of autonomous machines—the courage to act autonomously seems to be on the wane. Vallor, like Weizenbaum before her, seeks new metaphors for fortifying it.

Reviewer Information

Luke Fernandez is an associate professor in the School of Computing at Weber State University, Ogden, UT 84403 USA. Fernandez has a PhD in political theory from Cornell University, Ithaca, NY, USA. Email: lfernandez@weber.edu.

 

The full version of this article, including references, is available online.