One of the key requirements for the EU Multimedia Access through Personal Persistent Agents (MAPPA) project1 (1998–2000) was to build an electronic commerce system that “increased loyalty.” MAPPA was therefore one of the first computer systems that attempted to directly affect a qualitative human value associated with a conceptual social relation between two or more human systems (people and organizations), a relation that was itself a conceptual resource for encouraging prosocial behavior and mutually preferable outcomes from the relationship [1].
The idea of computational trust [2] had, however, been proposed and developed some years prior to the MAPPA project. Computational trust was largely concerned (in the first instance) with the representation of, and reasoning with, trust between computational systems (or software agents). While this extended to trust in human-computer interaction and computer-mediated communication, where issues like privacy are significant, the trust dimension has become particularly salient with the rise of “responsible” and “ethical” Artificial Intelligence, and the question of trust in intelligent machines [3].
But going back to the turn of the millennium, the telecoms companies (telcos), the nascent Internet Service Providers (ISPs), and the emerging “tech giants” readily recognized that customer loyalty and trust could be leveraged for (at least) two commercial opportunities, ostensibly for the benefit of their customers. The first opportunity was to create customer lock-in. Customer lock-in occurs when a vendor contrives a situation whereby a customer becomes completely dependent on the vendor for services and products, and is unable to switch to a competing vendor without incurring substantial costs. (A prime example is when Microsoft not only bundled their browser (Internet Explorer) with the operating system (Windows), but entangled the browser code with the operating system kernel [4].)
The second opportunity was to create a market segment of one. Electronic systems enable retail organizations to adopt different sales and marketing approaches using electronic commerce; i.e., rather than matching aggregate supply and demand, electronic systems enable companies to detect what an individual customer wants, and then to customize a product or service so that it fits that individual need. This is referred to in the literature of Customer-Relationship Management (CRM) as mass customization or accelerated 1-1 (one-to-one) [5]. Moreover, electronic systems make it much easier for marketers not just to target specific customer segments, but to target individual customers (hence a segment of one).
Despite the size of the market (i.e., everyone connected), the Internet actually makes it easier to create segments of one, for four reasons. Firstly, it is much more straightforward to capture, transmit, identify, and store detailed behavioral patterns. Secondly, the paucity of gatekeepers at the application layer (as a consequence of the network effect concentrating gatekeeper functionality into just a few platforms [6]) makes it easier to link to other financial, demographic, or preference data (extra indicators of lifestyle and individual profile). Thirdly, as more and more activity transitions online, it becomes easier to capture more and more data at multiple user (“customer”) contact points. Finally, once personal data that is given away freely (but piecemeal) has been integrated (largely without the user’s knowledge), it is relatively straightforward to apply well-known statistical techniques (e.g., logistic regression) to identify almost every individual almost uniquely — and so data is converted into a highly valuable commodity.
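To make the re-identification point concrete, here is a minimal sketch using entirely fabricated toy records; it uses nothing more sophisticated than counting combinations of quasi-identifiers (rather than the regression models mentioned above), but it shows how attributes that are individually common become almost unique once linked:

```python
from collections import Counter

# Entirely fabricated toy records: each attribute on its own seems harmless,
# but combinations of attributes (quasi-identifiers) quickly become unique.
people = [
    {"name": "A", "postcode": "SW7", "birth_year": 1984, "streaming": "jazz"},
    {"name": "B", "postcode": "SW7", "birth_year": 1984, "streaming": "metal"},
    {"name": "C", "postcode": "SW7", "birth_year": 1991, "streaming": "jazz"},
    {"name": "D", "postcode": "N1",  "birth_year": 1984, "streaming": "jazz"},
]

for keys in [("postcode",),
             ("postcode", "birth_year"),
             ("postcode", "birth_year", "streaming")]:
    counts = Counter(tuple(person[k] for k in keys) for person in people)
    unique = sum(1 for c in counts.values() if c == 1)
    print(f"{' + '.join(keys)}: {unique} of {len(people)} records are unique")
```

With one attribute, hardly anyone is identifiable; with three linked attributes, every record in this toy set is.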
Therefore, one consequence of the confluence of these forces — customer lock-in, segments of one, and unprecedented data personalization — is that Internet users have been atomized into individual revenue streams. This has enabled application layer gatekeepers2 to build “mega-platforms” that eschew messy social relations like trust and loyalty, which are somewhat contingent on the consumer, and that instead effectively unify three “classical” business models:
- brokerage — create a market place (or online, a platform) to connect buyers and sellers; charge commission on each transaction;
- razor and blades — offer one product cheaply or even at a loss, in order to increase sales of a second product that may be dependent, complementary, or, in some cases, essential3; and
- lottery — take a small amount of money from a lot of people and it adds up to a lot of money; although in this case, the “ticket” is advertising: billions of adverts are pushed every day, and even at micro-cents per ad, this is “a lot of money” (as the back-of-the-envelope calculation after this list illustrates).
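For a sense of scale, here is a back-of-the-envelope sketch; the impression count and per-advert revenues are assumed orders of magnitude, not reported figures for any platform:

```python
# Back-of-the-envelope only: the impression count and the per-advert revenues
# are assumed orders of magnitude, not reported figures for any platform.
ADS_PER_DAY = 5_000_000_000  # "billions of adverts are pushed every day"

for revenue_per_ad in (0.0001, 0.001, 0.01):  # dollars per advert
    daily = ADS_PER_DAY * revenue_per_ad
    print(f"at ${revenue_per_ad:.4f} per ad: "
          f"${daily:,.0f} per day, ${daily * 365:,.0f} per year")
```

Even at the smallest of these assumed rates, the annual total runs into hundreds of millions of dollars.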
While the emergence of Internet gatekeepers has created a few fabulously wealthy individuals (with all the implications that has for societies with vastly asymmetric and unequal income distributions [7]), and some unscrupulously ruthless assaults on democratic practices and processes (see, for example, the Cambridge Analytica scandal [8]), these mega-platforms have, with the addition of one extra ingredient, combined lock-in and loyalty to create a grave, and perhaps unexpected, consequence.
The extra ingredient is psychology; and the unexpected consequence is what might be called digital dependence.
In many areas of human endeavor, the Internet has been combined with psychological manipulation, not just to create and aggregate millions of individual revenue streams, but to ensure those streams are predominantly one way. Furthermore, this is more than just the use of sales techniques like pre-suasion [9]. For this is no mutually beneficial loyalty relationship, nor a mutually supportive trust relationship: this is full-on exploitation of a dependency relationship.
In a brutal exposé of the gambling industry, Dow-Schüll [10] uncovers the various psychological manipulations that casinos and bookmakers use to entice, hook, and then rinse their clients (punters). Electronic systems make these manipulations even easier. For example, in the U.K., legislation covering Fixed Odds Betting Terminals (FOBTs, such as slot machines) decreed that a fixed percentage of the money gambled had to be returned as winnings. However, while the chances of winning were fixed, the chances of nearly winning were not. Punters would then find themselves “nearly winning” much more often than if the slots were completely random. Not at all by chance, nearly winning creates enough of a dopamine hit to encourage the punter to play again (the so-called “near miss” effect [11]). Online gambling makes the process easier, more pervasive and more immediate — indeed it satisfies all the requirements of convenience [12] — and its consequences, in the form of self-destructive behavior4, are increasingly common, and correspondingly toxic.
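A minimal simulation sketch of this decoupling (the win probability, payout, and near-miss rates below are illustrative assumptions, not a model of any actual FOBT): the legally fixed return-to-player depends only on how often the machine genuinely pays out, while the frequency of displayed “near misses” can be tuned freely on losing spins.

```python
import random

def simulate(spins, p_win, payout, p_near_given_loss, stake=1.0, seed=0):
    """Simplified slot machine: the return-to-player (RTP) is fixed by
    p_win * payout alone; p_near_given_loss only controls how losing
    spins are *displayed* (e.g., two jackpot symbols and one just off)."""
    rng = random.Random(seed)
    returned, near_misses = 0.0, 0
    for _ in range(spins):
        if rng.random() < p_win:
            returned += payout * stake      # a genuine win
        elif rng.random() < p_near_given_loss:
            near_misses += 1                # a loss dressed up as "nearly won"
    return returned / (spins * stake), near_misses / spins

# Illustrative figures: a 1% win chance paying 90x the stake gives ~90% RTP.
for label, p_near in (("random near misses", 0.06), ("engineered near misses", 0.30)):
    rtp, near = simulate(200_000, p_win=0.01, payout=90, p_near_given_loss=p_near)
    print(f"{label:24s} RTP ~ {rtp:.2f}, near-miss frequency ~ {near:.2f}")
```

Both machines return roughly the same percentage to the punter; only the second makes the punter feel they “nearly won” five times as often.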
The addictive features of online gambling have also made the transition to the video-game industry. This is particularly observable in the proliferation of games, especially online and mobile games, that offer loot boxes. Loot boxes in video games are a kind of lucky-dip container offering a random bundle of in-game entities (items or characters) in exchange for real-world currency. Electronic games increasingly use a variety of psychological and psycho-economic mechanisms to encourage players to buy loot boxes: progression and advancement in the game might be made easier by items or characters only available through loot boxes; the “near miss” effect and the scarcity of valuable objects; the “disappointment” effect when confronted by yet another commonplace and worthless object; the macho competition engendered by player-versus-player gaming modes requiring upgrades in order to be competitive; the potential lack of feedback in online payments; and the unwitting role of “whales,” players with disposable income willing to spend substantial amounts of money chasing rare objects and so dragging prices up for everybody else (much as tourists who pay exorbitant rates for tuk-tuk rides drag the prices up for the local population). Unsurprisingly, this merging of gambling and gaming has had anti-social repercussions [13], and regulation has once again lagged behind technology (which has been called the “law lag”; see also [14]).
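The economics of chasing a scarce item follow directly from the drop rate. A minimal sketch (the drop rates and box price are hypothetical, and real games rarely disclose either): with independent draws, the number of boxes needed is geometrically distributed, so the expected number is 1/p.

```python
# Expected cost of chasing a rare loot-box item, assuming independent draws
# with a fixed drop probability (a geometric distribution). The drop rates
# and the box price are hypothetical, illustrative values only.
BOX_PRICE = 2.50  # hypothetical price per box

for drop_rate in (0.05, 0.01, 0.002):
    expected_boxes = 1 / drop_rate  # mean of a geometric distribution
    print(f"drop rate {drop_rate:6.1%}: ~{expected_boxes:,.0f} boxes on average, "
          f"expected spend ~{expected_boxes * BOX_PRICE:,.2f}")
```

At the assumed 0.2% drop rate, the expected chase is around 500 boxes, i.e., well over a thousand units of currency at the assumed price, which is precisely the territory in which “whales” operate.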
However, it is not just the gambling and gaming industries that have cynically taken advantage of the disconnection between physical and digital environments, and the fact that neural pathways that evolved for the former can be exploited for commercial gain in the latter [15]. The purveyors of social media platforms have also worked out how to activate the same “addictive” mechanisms, notably related to neurotransmission, that are used to hook gamblers and gamers. The interface and affordance mechanisms include: “pull to refresh” as the arm of a social media one-armed bandit (with the same feelings of anticipation and reward, and the reward not always delivered); the liberal use of the color red for alerts demanding attention; the role of “likes” and “kudos” in the pursuit of social acceptance and affirmation; the exploitation of the need for meaning and narrative (mythos) in the corrupted form of “stories”; the suggestion of “fomo” (fear of missing out) requiring regular attention be paid to social media feeds; and the maintenance of streaks [16], which does nothing more than milk futile and febrile attempts to avoid the inevitability of self-disappointment, attempts even less likely to succeed than new year resolutions.
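In reinforcement terms, “pull to refresh” is a variable-ratio reward schedule: the feed delivers something novel only some of the time, and unpredictably. A toy sketch (the probability and the pull_to_refresh function are illustrative assumptions, not any platform’s actual behavior):

```python
import random

# Toy variable-ratio reward schedule: each "pull to refresh" yields novel,
# "rewarding" content only some of the time. The probability is an assumption.
def pull_to_refresh(rng, p_new_content=0.3):
    return rng.random() < p_new_content  # True: the feed shows something new

rng = random.Random(42)
pulls = [pull_to_refresh(rng) for _ in range(20)]
print("".join("*" if rewarded else "." for rewarded in pulls))
# The output is an unpredictable run of '*' (rewarded) and '.' (not rewarded);
# in operant-conditioning terms, intermittent rewards like this sustain more
# persistent behavior than a predictable, fixed schedule would.
```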
There is some medical uncertainty over whether excessive, obsessive, or compulsive use of online platforms (whether for gaming, gambling, or social media) is clinically an addiction in the strict sense, i.e., characterized by an inability to stop using a substance, failure to meet obligations, and so on; but it seems clear that some form of dependence is being activated. Some of the symptoms of dependence in a person include:
- Tolerance: the person requires increased use to achieve a certain effect, and may experience some associated tactile responses that are not real — some people check their phones hundreds of times per day, and phantom phone vibration is a reported phenomenon [17];
- Lack of attention: the person becomes less able to take care of their work, family, or social commitments — it is often young children who say they would like to reduce their parents’ screen time, rather than the other way round [18];
- Carelessness and insensitivity: despite being told or warned, the person puts potentially compromising material on social media, risking careers and relationships — see, for example, Internet-shaming and Twitter-storms [19];
- Deteriorating mental health: a lowering of self-esteem, caused by an inaccurate comparison with the supposedly perfect lives of social media influencers; compounded by a growing awareness of having little self-efficacy with regard to ending the dependence; the person is already too heavily invested to back out, and anyway there is an absence of a viable alternative; and
- Toxicity: unlike addiction to chemical substances where the downward spiral of dependence and addiction ends in physical death, digital dependence can lead to a strange sort of digital death: a twilight of the persona involving a loss of cognitive skills, loss of autonomy, loss of self-control, loss of individuality…
In conclusion, our perception is that, until very recently, sales and marketing techniques in the digital sphere tried to build on social relationships like trust and loyalty, even as they tried to tie customers to organizations with a lifelong relationship; even as “loyalty” meant no such recognizable thing (switching customers often got better deals and preferential treatment than long-term customers; “loyalty” cards offered exiguous rewards to the customers in return for the benefits that the service-provider derived from aggregating their data; and so on).
In comparatively few years, the technological innovation of the “SmartPhone,” in conjunction with the potential of statistical machine learning and a deep understanding of the human psyche, has corrupted even the pretense of loyalty and replaced it with a dependency relationship, creating digital dependence. For the application-level gatekeepers, the vision of the future is not a boot stamping on a human face forever: instead they will come for you with a like, a smiley emoji, and a gaslight, with one hand clamped on your throat and the other hand deep in your pocket. In the short term, the only way to bring about meaningful change is by regulation, putting boundaries on political advertising, psychological manipulation, and willful misinformation. In the long term, a more structural approach is required, perhaps starting with an identification of the factors that together determine the scale and nature of the possible harm caused by digital dependence: for example, the mental and physical harm caused to individual users; the legitimacy of the intent to induce dependence, on a spectrum from habituation to addiction; and the deleterious effect of digital dependence on communities and societies.
Unfortunately, digital dependence also seems to make people susceptible to voting for strange things, and rarely in their own better interests.
Author Information
Jeremy Pitt is Professor of Intelligent and Self-Organizing Systems at Imperial College London, U.K. Email: j.pitt@imperial.ac.uk.