“It was the best of times, it was the worst of times, …” So begins Charles Dickens’ A Tale of Two Cities, set during the French Revolution of the late 18th century, highlighting the coexistence of radical opposites and contrasting extremes in a period of uncertainty, political upheaval, and rapid technological development.
How, then, might Dickens introduce a putative story set during the (so-called) information revolution of the early 21st century, highlighting the coexistence of radical opposites and contrasting extremes in a period of uncertainty, political upheaval, and rapid technological development? Would he write: it was an age of enrichment, it was an age of impoverishment? Or: it was an age of empowerment, it was an age of helplessness? Or: it was the epoch of credulity, it was the epoch of cynicism?
Undeniably, the ubiquitous global internet combined with mobile personal communication devices has enriched the lives of billions but, at the same time, has impoverished people and public life in many subtle ways. For example, ride-sharing technology has made personal transportation more convenient for those who can afford it, but it has also devastated the lives, families, and finances of taxi drivers (while massively enriching, financially, a very few) [1]. Putting a mega-casino in everyone’s pocket has made a form of entertainment (i.e., gambling) readily accessible to billions, but it has also devastated the lives, families, and finances of gambling addicts (while, again, massively enriching, financially, a very few) [2], [3].
Moreover, potentially unrestricted access to knowledge and expertise, and the potential to reach audiences of unparalleled size, have greatly empowered the ordinary citizen. However, advances in neuroscience, data science, individual identification, and design for capturing attention have reduced many to little more than predictable (and so more easily manipulated) finite-state machines. Consequently, people can become helpless units in an aggregated revenue stream (cf. [4]), producing a reversion to a societal organization based on feudal arrangements [5], [6], [7]. And it remains a source of concern to progressive leadership that the world’s most technologically advanced nation should also be the most credulous with respect to, for example, misinformation about mask-wearing and vaccination during a global pandemic [8]. Although this can be partially attributed to opportunistic cynicism in advancing an extreme political agenda, what sort of signals are being sent here?
The question underlying these radical opposites and contrasting extremes is this: clearly “we”—“we” interpreted here as both individuals and society—have developed a deep and special relationship with our devices, and in many dimensions this relationship has been undeniably beneficial. But could this relationship now, in the parlance of psychotherapy and counseling, be categorized as toxic? In discussing this question from this perspective (but see also [4] and [9]), we first consider whether the individual relationship between user and device can be considered toxic, and then whether the collective (societal) relationship is toxic, at least for the individuals and societies concerned; generally speaking, the devices don’t care so much, even if it can be made to seem as if they do.
Toxic individual relationships
A toxic relationship can be defined as a relationship in which there is a repetitive pattern of behavior that causes more harm than good for either or both of the parties, whose well-being is then threatened in some way: emotionally, socially, physically, financially, or psychologically. There are two particular characteristics of toxicity: first, that the relationship is more harmful than beneficial; and second, that it is difficult, if not impossible, for the harmed parties to extricate themselves from the relationship.
Even taking the first characteristic as moot rather than given, it is the second characteristic that is perhaps the more disturbing. The user–device relationship now extends beyond contractual “customer lock-in” to the deliberate exploitation of addictive affordances in interface design [10], [11], while elegance in design aesthetics (“look and feel”) can create an attachment deeper even than brand loyalty [12]. Moreover, a typical organizational response to security flaws has been to introduce multifactor authentication, which has made the device a single point of failure on every critical path in working and social life. Without the device to verify identity, necessary access to email and other coworking, video-conferencing, and “productivity” apps is impossible; similarly, access to shopping, banking, and entertainment services can be blocked. The fear of loss or theft drives payment for insurance (and, as it is said, just as there is no poor bookmaker, there is no unprofitable insurance company, and for the same reasons), but, in a nice twist, the insurance policy is invalidated unless some sort of “Find My” feature is activated. So, the user pays twice: the first time with money, as a sort of membership fee to gain access to the possibility of having insurance, and the second time with data, in order to actually activate the policy.
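To see why device-bound multifactor authentication creates a single point of failure, consider a minimal sketch (hypothetical service names and enrollment secret; standard-library Python only): every service demands a time-based one-time code that only the enrolled device can generate, so without the device, every path is blocked at once.

    import hmac, hashlib, struct, time

    DEVICE_SECRET = b"provisioned-on-the-phone-only"  # hypothetical enrollment secret

    def totp(secret, step=30):
        # RFC 6238-style time-based one-time password (truncated HMAC-SHA1)
        counter = struct.pack(">Q", int(time.time() // step))
        digest = hmac.new(secret, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return "%06d" % (code % 1000000)

    # Every critical path in working and social life runs through the same device.
    SERVICES = ["email", "video-conferencing", "banking", "shopping"]

    def log_in(service, device_in_hand):
        if not device_in_hand:
            return service + ": locked out (no device, no code)"
        return service + ": access granted (code " + totp(DEVICE_SECRET) + ")"

    for s in SERVICES:
        print(log_in(s, device_in_hand=False))  # lose the phone, lose everything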
In these ways, the user develops a dependence on the device. It can then be extremely hard to extract oneself from such a relationship, in particular because of a range of cognitive biases that are exploited in the formation of the relationship and then reinforced in its perpetuation, keeping someone trapped in an ever-worsening situation. These cognitive biases—many of which are identified by Prospect Theory [13], which predicts that individuals will assess their prospects for gains and losses asymmetrically—include status quo bias, confirmation bias, loss aversion, the endowment effect, self-blame, and the false consensus effect.
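The asymmetry has a standard quantitative form. In Tversky and Kahneman’s later, cumulative formulation of the theory (quoted here purely for illustration, rather than taken from [13] directly), outcomes are evaluated by a value function that is concave for gains, convex for losses, and steeper for losses:

    v(x) = \begin{cases} x^{\alpha}, & x \geq 0 \\ -\lambda\,(-x)^{\beta}, & x < 0 \end{cases}

Their median parameter estimates were α ≈ β ≈ 0.88 and λ ≈ 2.25, so a loss “weighs” roughly twice as much as an equal gain; this is the lever that loss aversion and the endowment effect, discussed below, both pull.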
Status quo bias is defined as the tendency to prefer that “arrangements” stay relatively the same, which in a toxic relationship becomes an irrational preference for an existing situation to persist rather than seeking a new one. The separation anxiety triggered by nomophobia (the fear of being without a mobile phone, an identifiable neurosis even though it is not included in the DSM-5), together with surveys showing that younger people would rather “break a bone than lose their phone,” suggests that possessing a mobile device and being continually connected to the internet is the status quo, and that any threat to its persistence is severely disruptive.
Confirmation bias [14] is identified as the tendency to accept information that confirms established beliefs and preferences, while under certain circumstances contradictory information is not just rejected but can actively reinforce those beliefs. There is abundant evidence (e.g., [15] and [16]) that well-being is diminished by, or through, mobile devices and the social media apps running on them, in the ways that device usage affects memory, concentration, and sleep patterns; a wave of well-being apps has been produced in response (e.g., [17]). As a result, awareness of the health risks involved has become almost common knowledge; but unless and until it is accompanied by the equivalent of the health warnings on, and standardization of, cigarette packets, these academic studies are likely to fall victim to confirmation bias and possibly even increase attachment. This common knowledge is therefore unlikely to lead to any collective action to address the risks.
Loss aversion is the bias whereby people perceive the loss of utility in giving up (or going without) an object as greater than the gain in utility associated with acquiring it. This bias can be exploited by ensuring that the device becomes a surrogate attachment object, for example, replacing the childhood teddy bear [18]. Then, instead of being an ever-present source of comfort as in childhood, the device–app pair becomes a potential source of existential validation through “likes,” “kudos,” reputation, or other forms of social credit scoring or approval rating. Although the pursuit of such validation can be manipulated through variable rewards, and so become a cause of increased anxiety rather than satisfaction, losing it is perceived as much worse, because it also forfeits the investment of time, energy, or money required to gain the validation in the first place.
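The “variable rewards” mechanism is easy to sketch. The fragment below (hypothetical payout probability; standard-library Python) simulates a feed that dispenses social validation on a variable-ratio schedule, the same intermittent-reinforcement pattern used by slot machines and the one known to be most resistant to extinction:

    import random

    def check_feed(p_reward=0.3):
        # Variable-ratio schedule: each check of the feed MIGHT pay out,
        # unpredictably; the unpredictability itself drives compulsive checking.
        return random.random() < p_reward  # True = a 'like' arrived

    random.seed(42)  # fixed seed so the illustration is reproducible
    hits = [i for i in range(20) if check_feed()]
    print("checks rewarded:", hits)  # sparse, irregular payoffs keep users checking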
Loss aversion is given as one reason for the endowment effect, the cognitive bias whereby people value an object they already own more highly (higher even than market value) than a similar item they do not own. This is further reinforced if the object carries emotional or symbolic significance. While there is more functionality in a single handheld object than was ever included in a Swiss Army knife, a mobile device bundles both functionality and personal history in the same object—for example, it serves as both camera and photo album, as both message exchange channel and “sentimental shoebox,” and as both music player and “mix tape” medium (a homemade compilation of an often personal playlist). Ownership is therefore critically important for leveraging the endowment effect, which might explain why service providers are so keen to offer device upgrades that are often functionally otiose and generally inconsistent with corporate claims of sustainability, although the upgrade is effectively paid for by binding the user to a long-term service contract (longer than the expected life cycle of the device).
Self-blame is a cognitive process whereby someone attributes the causes and consequences of adverse interactions in a toxic relationship to themselves rather than to the other party. Consequently, when someone realizes they have just wasted another evening (and maybe night) watching cat videos or bad movies on a streaming service, let alone lost a week’s wages in one session at an online casino, the tendency is to be resentful toward oneself and filled with self-loathing, rather than to reappraise the toxic relationship with the device and/or “service” provider.
The false consensus effect is the belief that one’s opinion or situation is more common in the general population than it actually is. As a consequence, it becomes a reason not to terminate a toxic relationship because, “after all,” aren’t all relationships similar? Doesn’t everyone have the same experience? The irony is that, in the case of the toxic relationship between a user and their device, the consensus isn’t so very false.
Toxic societal relationships
Technology has always been more than simply a route to increased productivity and economic growth; technology also provides the opportunity to enhance, enrich, and empower—basically, to improve shared qualitative values or people’s quality of life (however that is measured). On the flip side, technology also provides the opportunity to develop and project organizational control, which can itself be weaponized to quantitatively determine human value as an asset to that organization, or to reinforce asymmetric power relationships [19]. In exchange for the undeniable benefits of progress enabled by advances in science and technology, other components of a functioning “democratic” society—human rights, political self-determination, knowledge gatekeeping, well-being and welfare, and infrastructure and security—can be undermined. The concern, then, is whether the societal benefits of technology are being outweighed by the collective harm, and whether the relationship between society and technology has also turned toxic.
For example, the distraction and dissociation contrived by engrossment with the (so-called) “metaverse” can leave people unaware of, or lacking time to pay attention to, encroachment on or reduction of human rights. There is no material benefit for a group of people in multiplayer mode in some online game to be knee-deep in digitally generated goblin corpses if, in the “real world,” a bunch of malignant PPE goblins are pursuing absurd pie-centric economics while curtailing the right to strike, restricting the right to protest, limiting freedom of movement, obstructing the right to vote, and taking other illiberal measures to diminish or remove rights that “the people” have struggled for centuries to have recognized.
The liberal concern for human rights lost to distraction spills over into the issue of political self-determination. While the principles for sustaining common-pool resources through self-determination [20] have been observed in online computer games [21], the central problem, according to Bartlett [22], is that the drivers of technology and democracy are in conflict. The irony is that the democratization of technological access has resulted in a de-democratization of political control: the decentralization inherent in both direct and representative democracy is contradicted by the centralization driven by the so-called “platform revolution” [23], by the network effect leading to the domination of “BigTech,” and by the privatization of what were once considered, but are no longer, public spaces [24]. The essence of this is that democratic processes are, intentionally, slow: they are reflective, exceptionable, deliberative, accessible, adjustable, reversible, and consensus-seeking. Compare this to the “smart contracts” of distributed consensus technologies, which are impulsive, deterministic, irrevocable, opaque, fixed, and essentially majoritarian.
In the consideration of both rights and self-determination, there are two further points worth noting. The first is that it is important to identify the trust (and indeed the faith) relationships: in a democracy, one has to trust (have faith in) a set of interlocking institutions (deliberative assemblies, political parties, an independent judiciary, a free press, and effective enforcement) and shared values, in particular legitimate consent, responsibility, and accountability; these values are required for continuous systemic improvement and for tangible consequences for misrule or corruption. The second is that none of this actually exists except in the heads of those involved; and yet the social construction of institutional reality, relative to the “real world,” is exactly what matters. Authoritarian technocrats have managed to detach established trust relationships from traditional democratic institutions, reattach “trust” to themselves instead, and compromise social construction through the creation of false narratives in misinformation and disinformation.
Propagating misinformation is also the key to undermining traditional knowledge gatekeepers, in particular the universities and the press [25]. Research was once the preserve of a trained and educated elite but, for all that, it was at least objective and in pursuit of the common interest. This has been supplanted by the exhortation of any snake-oil salesperson with a keyboard and an internet connection for people “to do their own research,” by which they mean using a search engine (whose algorithms are biased to prioritize the content of maximum provocation) and giving in to confirmation bias, rather than conducting a double-blind randomized controlled trial and submitting the results for peer review. Moreover, quite evidently, the ability to weaponize misinformation, data science, and targeted advertising to insidiously cause maximal disruption has not gone unnoticed, or unpracticed.
Increasingly, the privileges that technology brings belong only to those who can afford them, especially when the state withdraws from the provision of national infrastructure and public services. Those who are sufficiently asset-rich to own an electric vehicle can be paid to consume electricity during storms, while those without must choose between heating and eating. Those who earn enough to pay for ride-“sharing” services can outbid others for a taxi, while those who do not must wait for unreliable and overcrowded public transport. Those who (already) have resources such as home computers and unlimited internet access can use online learning to continue their education during a global pandemic, but for those who have not, even that which they had (a right to an education) is restricted or denied. Those who are struggling with mental health issues but have enough disposable income can get a consultation with a professional therapist or counselor, while those without must make do with LLM-generated platitudes from a cheap chatbot [26].
Consequently, the argument over the social contract, which determines what is a privilege and what is a right, no longer affects only the traditional issues of health, education, welfare, and movement, but now also digital rights such as privacy and even security, both of which are increasingly available only to those who can afford them (consider again the example of the insurance policy discussed earlier in this article). However, it is not just the citizen–state relationship that is being recast by internet technology; it is also the citizen–citizen relationship. The ascent of swipe culture is particularly pernicious (be honest: who hasn’t encountered someone or something intractable in the “real world” and wanted to swipe left?). But the real influence is more insidious: things that are hard or need work, like maths and relationships, are treated as equally disposable commodities.
It is easy to insist that if only people were better educated and more informed, then they would make the right decisions.
Another right-versus-privilege argument is being contested over well-being, in particular that aspect of well-being concerned with attention. The term “attention economy” has been coined to describe the fact that attention is a finite resource (there is only so much prefrontal cortex available for higher cognitive functions) and that technology companies are continually competing for a bigger share of it. This is partly because, as private companies operating in late capitalism, they are legally obliged to maximize shareholder profit; and they maximize revenue not through the technology or the service but by selling advertising space. BigTech companies are effectively in control of a massive billboard, which is always visible and always flashing, and we are all being compelled to gawk, forever.
Summary—Is there an antidote?
“… It was the age of wisdom, it was the age of foolishness …” So continues Dickens’ A Tale of Two Cities, and perhaps this is an apposite contrasting pair for the modern world: in a time of widespread knowledge and awareness, we refuse to take action against an existential crisis, one that we have brought upon ourselves, and prefer to look at an endless stream of pictures of kittens (and adverts; and the stream is endless, by design … kerr-chinggg …). Meanwhile, another existential crisis, one that we have caused to our environment, continues unchecked.
A relationship is said to be toxic when its benefits are outweighed by the harm that it is doing; moreover, there are a number of cognitive biases that stop people from extricating themselves from such relationships. This article has argued that the benefits–harms tradeoff in the individual user–mobile device (internet/social media apps) relationship has tilted toward harm (although quantifying this claim is extremely problematic; see [27]). In that case, the relationship could be categorized as toxic, and the same cognitive biases are operative that prevent us, as individuals, from redefining our relationship with our elegantly designed, extremely convenient, deliberately addictive technology. Furthermore, it was argued that the benefits–harms tradeoff in the collective (societal) relationship with technology has also tilted toward harm. Surely, to echo a much earlier prescient paper, we can do better than this [28].
There is, however, no magic silver-bullet solution. But all is not lost: we could synthesize the recommendations of authors such as Zarkadakis [29], Bartlett [22], and Klein [30], in conjunction with promoting the use of public interest technology and platforms [31] and a framework for civic participation based on contributive justice [32], [33]. Perhaps what we need most of all, to deal with cognitive bias, is the praxis of what might be called collective detachment theory: how do we, as a society, detach from a devotion to our devices and reattach our collective selves to our shared values?
Author Information
Jeremy Pitt is a professor of intelligent and self-organizing systems with the Department of Electrical and Electronic Engineering, Imperial College London, SW7 2BT London, U.K. He is a Fellow of the British Computer Society (BCS) and the Institution of Engineering and Technology (IET) and a Member of IEEE. He is currently the Editor-in-Chief of IEEE Technology and Society Magazine. Email: j.pitt@imperial.ac.uk.