The Intelligence Factor: Technology and the Missing Link

Published June 20th, 2022, in Articles, Artificial Intelligence (AI), Ethics, Human Impacts, Magazine Articles, Social Implications of Technology, Societal Impact

“…By design, machines are our obedient and able slaves. But intelligent machines, however benevolent, could threaten our existence because they will be alternative inhabitants of our ecological niche. Even machines merely AS clever as human beings will have enormous advantages in certain competitive situations. They cost less to build and maintain, so more of them can be put to work with given resources. They can be optimized to do their jobs and programmed to work tirelessly [1, pp. 136, 138].”

In his new book, Future Minds: The Rise of Intelligence From the Big Bang to the End of the Universe, author, speaker, and futurist, Richard Yonck, covers a lot of ground under the umbrella of emerging intelligence in the universe, including 21st-century artificial intelligence (AI), its promise, and its threat.

After citing a number of diverse definitions of intelligence, Yonck settles on a definition by physicist, mathematician, and entrepreneur, Alexander Wissner-Gross:

“Intelligence acts so as to maximize future freedom of action” [2, p. 18].

Yonck takes this definition and runs with it. If one sweeps across the entire span of evolving life, what bubbles up is just what Wissner-Gross claims. In the realm of life, freedom of action enables access to more possibilities for future action for those species that smartly avoid getting trapped in evolutionary dead ends. As Wissner-Gross puts it in his TEDxBeaconStreet presentation, “A new equation for intelligence”: “[I]ntelligence doesn’t like to get trapped” [3]–[5]. One sees this in the game of chess, in which the winner has avoided the dead end of checkmate.
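Wissner-Gross’s maxim lends itself to a toy illustration (a sketch of my own in Python; his actual causal entropic force model integrates over sensorimotor trajectories, which this does not attempt): a grid agent scores each candidate move by how many distinct future states remain reachable from it, and thereby steers away from dead ends, like the chess player avoiding checkmate.

```python
from collections import deque

def reachable_states(pos, walls, horizon):
    """Count distinct cells reachable from pos within `horizon` moves (BFS)."""
    seen = {pos}
    frontier = deque([(pos, 0)])
    while frontier:
        (x, y), d = frontier.popleft()
        if d == horizon:
            continue
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in walls and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

def best_move(pos, walls, horizon=3):
    """Pick the move that maximizes future freedom of action,
    i.e., the count of reachable future states."""
    x, y = pos
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    legal = [m for m in moves if m not in walls]
    return max(legal, key=lambda m: reachable_states(m, walls, horizon))

# A corridor that dead-ends on the left: walls above, below, and at x = -3.
walls = {(x, y) for x in range(-3, 5) for y in (-1, 1)} | {(-3, 0)}
print(best_move((0, 0), walls))   # → (1, 0): steps right, toward open space
```

Both the left and right moves are legal here, but more future states stay reachable to the right, so the agent avoids the trap — “intelligence doesn’t like to get trapped,” in miniature.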

“Intelligent machines, however benevolent, could threaten our existence because they will be alternative inhabitants of our ecological niche.”

In chess, control of the center of the board is a tactical goal because it allows greater mobility of pieces and access to positions on the board, maximizing future freedom of action, in this case game-winning chess moves. The evolution of upright walking freed the hands for far more sophisticated manipulation. In his article, “The real reasons we walk on two legs and not four,” BBC journalist, Richard Gray, writes that “It is widely recognized that permanently standing up opened up new opportunities for our ancestors to touch, explore, pick up, throw, and learn.” According to Chris Stringer, a leading anthropologist at the Natural History Museum, London, “Walking upright freed the hands for carrying and manipulating tools … It allows longer-distance walking and, eventually, endurance running. Ultimately, it may have been a key step that led our ancestors’ brains to grow” [6]. Walking on two legs, upright stature, and freeing the hands for tool use coupled to social cooperation greatly multiplied the power of multiple hands and brains.

Freedom of action represents free energy or “exergy,” the localized shrinking of entropy, the measure of dissipated, spread-out, useless energy. Maximizing freedom of action implies the competitive advantage of intelligence that successfully achieves goals. It also represents the purposeful connecting of previously disconnected matter and energy, the creation of order out of disorder. In our human world, this could be interpreted as the solving of problems.
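Exergy has a standard thermodynamic form (my addition; the article leaves it undefined). For a closed system with internal energy $U$, volume $V$, and entropy $S$, relative to an environment at temperature $T_0$ and pressure $p_0$ (subscript 0 marking the equilibrium, dead-state values):

```latex
% Exergy: the maximum useful work extractable as the system
% equilibrates with its environment
B = (U - U_0) + p_0\,(V - V_0) - T_0\,(S - S_0)
```

The freedom-of-action reading lives in the $-T_0\,(S - S_0)$ term: the lower a system’s entropy relative to its environment, the more work, and hence action, remains extractable from it.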

Gravity as Intelligence

In his article, “On the origin of gravity and the laws of Newton,” physicist, Erik Verlinde, linked both the existence of gravity and Newton’s laws to the maximizing of entropy.

As Verlinde puts it:

“[T]he central notion needed to derive gravity is information. More precisely, it is the amount of information associated with matter and its location … measured in terms of entropy. Changes in this entropy when matter is displaced lead to an entropic force, which … takes the form of gravity. Its origin therefore lies in the tendency … to maximize its entropy” [7]–[9].1

Based on Wissner-Gross’s tie-in of intelligence to causal entropic force, one can tie Verlinde’s theory to intelligence lying at the core of how the universe works. The very existence of the universal attractive force of gravity2 can be seen as a means by which entropy can be maximized. Gravity gathers vast gas clouds into concentrated spheres and squeezes the gas to the millions of degrees needed to initiate fusion and the subsequent release of potent, transformable, low-entropy energy as photons of light. In doing this, the resulting star irreversibly consumes its mainly hydrogen fuel, eventually leading to its death. The entropy produced by fusion compensates for the local shrinking of entropy in Wissner-Gross’s definition of intelligence. If a star has a solar system, and a planet finds itself in the so-called “Goldilocks zone”—not too hot/not too cold, not too big/not too small, not too close/not too far from its star, axis tilted enough but not too much, …—with the right kinds and proportions of elements, it could be ripe for the evolution of life as another means for maximizing future freedom of action coupled to the most efficient production of entropy over time.
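Verlinde’s argument can be compressed into a few lines (a sketch of the derivation in [7], in its standard textbook form, not spelled out in this article). When a mass $m$ approaches a holographic screen of radius $R$ enclosing mass $M$, the entropy change of the screen and the equipartition of energy over its bits recover Newton’s law:

```latex
% Entropic force: work done equals heat released by the entropy change
F \,\Delta x = T \,\Delta S, \qquad
\Delta S = 2\pi k_B \,\frac{m c}{\hbar}\,\Delta x

% A screen of area A = 4\pi R^2 carries N bits; equipartition
% fixes its temperature:
N = \frac{A c^3}{G \hbar}, \qquad
E = M c^2 = \tfrac{1}{2}\, N k_B T

% Eliminating T and N yields Newton's law of gravitation:
F = \frac{G M m}{R^2}
```

Gravity, in this reading, is nothing but the statistical tendency of the screen’s information content to maximize its entropy, which is exactly the bridge to Wissner-Gross’s entropic definition of intelligence that the article draws.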

Based on Wissner-Gross’s tie-in of intelligence to causal entropic force, one can tie Verlinde’s theory to intelligence lying at the core of how the universe works.

Entropy as the Self-Similarity of Self-Similarities

Self-similarity, as cosmologist Robert L. Oldershaw proposes in “Nature adores self-similarity,” applies up and down the scales, including the biological and ecological evolution of life.3 The branch resembles the tree; the stem resembles the branch. The veins in the leaves resemble river tributaries, resemble the branching of the circulatory system, resemble the branching of nerves. The self-similar architecture inside our lungs allows us to breathe. “Without self-similarity the Earth would be devoid of vegetation.” A familiar example is “a regular cauliflower [which, like a Matryoshka doll,] often displays self-similarity in at least seven fractal scales” of spirals nested within spirals [10]. Without self-similar architecture, we could not digest food; but that would not matter, because there would be no food and we would not exist [11].
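The scaling behind such nesting can be quantified. A minimal Python sketch (my illustration; it uses the middle-thirds Cantor set rather than any example from the article): because every level of the set is two one-third-scale copies of itself, covering it takes 2^n intervals of length 3^-n, and the box-counting dimension log 2 / log 3 ≈ 0.631 comes out the same at every scale.

```python
import math

def cantor_boxes(level):
    """Intervals of size 3**-level needed to cover the Cantor set:
    each level splits every piece into two one-third-scale copies."""
    return 2 ** level

# Box-counting dimension: D = log N(eps) / log(1 / eps)
for n in (4, 8, 16):
    eps = 3.0 ** -n
    D = math.log(cantor_boxes(n)) / math.log(1.0 / eps)
    print(n, round(D, 4))   # 0.6309 at every scale: exact self-similarity
```

For a cauliflower or a lung the dimension must be estimated from data, but the same signature holds: the box count follows a power law in the box size precisely because the part resembles the whole.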

Though the scope of ways in which we, that is, all of life, share self-similar patterns is vast, there is one pattern pulling the strings backstage of all self-similarities: the second law of thermodynamics. In “Life, gravity and the second law of thermodynamics,” Australian astrophysicists, Charles H. Lineweaver and Chas A. Egan, claim that entropy “is the unifying concept of life because the second law is universal; it applies to everything. Man, [woman], machine, microbe, or the entire cosmos—there is no scale or material to which the second law does not apply” [12], including black holes. As physicist, Carlo Rovelli, puts it in The Order of Time, “It is entropy, not energy, that drives the world” [13], [14].

With respect to the emergence of, or better, the continued evolution of intelligence as the maximization of future freedom of action, the universe’s love of self-similarity, though seemingly abstract and remote from our everyday affairs, in fact bears on our current and future engagement with exponentially advancing technology, especially its state-of-the-art line of attack: AI and its spinoffs in machine learning/deep learning, genetic/evolutionary programming, artificial neural networks (ANNs), self-aware, self-improving software, robotics …

In his book, Our Final Invention: Artificial Intelligence and the End of the Human Era, author and documentary filmmaker, James Barrat, cites the computational neuroscience research of Dartmouth (where AI first got its name in 1956) neuroscientist, Dr. Richard Granger. Though the term is not mentioned, Granger’s AI research treats self-similarity as driving the computational principles by which the brain can serve as a guide for advancing AI. Granger “believes creating [artificial] intelligence has to start with a close study of the brain.” The brain draws on the universe’s love of self-similarity; “just a few kinds of algorithms govern the circuits of the brain. The same core computational systems are used again and again in different sensory and cognitive operations such as hearing and deductive reasoning. Once these operations are recreated in computer software and hardware, perhaps they can be simply duplicated to create modules to simulate different parts of the brain” [15], [16].

The dark side, implied in Barrat’s title and subtitle, of using the self-similarity of brain circuits to advance AI is the über-self-similarity: the second law. The second law insists that, no matter what happens in the universe, the sum total of drained possibility, entropy, always increases. If entropy shrinks in a particular system, or system of systems, as it is doing, feeding on itself, in the onrushing juggernaut of AI and company, then it must, and will, increase all the more in that system’s environment. While intelligence can successfully create goal-directed order out of disorder, that order must and will be entropically compensated.

How does this link to Wissner-Gross’s proposed definition of intelligence? Intelligence represents the most competitively efficient, self-organizing concentration of purpose-driven order out of scattered, uncoordinated, yet potentially organizable matter and energy, whose ultimate purpose is to more effectively give the second law what it wants: entropy; in this case, entropy displaced into the environment of the rising intelligent system. As ecological thermodynamicist, Eric Schneider, and ecological scientist and policymaker, James Kay (1954–2004), succinctly put it in their article, “Order from disorder: the thermodynamics of complexity in biology,” we have order emerging from disorder in the service of causing even more disorder [17], [18]. The displacement of dissipation, the fundamental modus operandi of Ilya Prigogine’s “dissipative structures” (more on this in a bit), entropically compensates Wissner-Gross’s intelligence as the means to maximize future freedom of action. This has profound implications for our future with advancing technology and its cutting-edge AI research, development, and profit-driven marketing.

Technology and the Co-Evolution of Us?

Richard Yonck cites as an example our own “co-evolution” with technology as a product of maximizing future freedom of action. Our proto-human ancestors developed, exploited, and passed on to future generations the techniques of creating, and deploying to competitive advantage, sharpened stone tools. He writes that edged stone tools date as far back as 3.4 million years, to our Australopithecus afarensis ancestors, whose brains were just a third the size of modern humans’:

“The creation and use of edged stone tools was a major milestone in early human history because it sets us on a path that was to change our species forever. Though we are hardly the first animal to use or even fashion tools … human beings are the first, and so far only, species to routinely transform natural resources into tools and machines. In doing so, we ourselves have been transformed …[Though it] is likely they had an intellect only a little greater than that of a chimpanzee …[for] this species to have discovered how to put edge to stone consistently and methodically and pass that knowledge on to later generations is astonishing” [2, p. 77], [19].

The generationally passed-along deployment of sharpened-edge stone tools fulfills in a major way Wissner-Gross’s characterization of intelligence. The tool maximized the future freedom of action of our proto-human ancestors, enabling the butchering of carcasses, which allowed them to eat new types of food and exploit new territories. Coupled to the evolution of upright walking,4 it set the stage for a great new range of future opportunities whose exploitation opened doors for further maximization of future freedom of action in the development of ever more complex brains, social networks, and agile bodies. As an agent of power, it served as a competitive edge, literally and figuratively.

One can see the emergence of ever more efficient forms of intelligence as networked self-similar patterns embedded in the universe at its core, driven as they are by the sustained maximization of entropy as a causal force. As a maximizer of future freedom of action, the very existence of gravity can be viewed as an embedded, purposeful, goal-directed form of intelligence. Stars evolve to exploit and consume the inherently low-entropy existence of hydrogen. When their hydrogen fuel is used up, their further downstream collapse fuses and consumes helium, then carbon, on up to the supernova-scale collapse of massive stars and their spraying into space of the elements needed for the emergence of further freedom of action in the form of life on the gravitationally formed planets of later-generation stars like our sun.

As a maximizer of future freedom of action, the very existence of gravity can be viewed as an embedded, purposeful, goal-directed form of intelligence.

Is it Really Co-Evolution?

The implication of co-evolution is that we Homo sapiens and technology advance together as rising intelligence in the universe, a view that meshes with the conventional wisdom on technology. Good technology is good for us.5 The fly in this ointment, the missing link, however, is the second law. The shrinking entropy in techno-intelligence, and its spillover into us of enabled future action, goes conceptually uncompensated by rising, always greater, entropy in the environment of co-evolving humans and technology. Since the second law will not allow this, where is the entropy being discharged? What is compensating the exponentially rising organization, the radically shrinking entropy, in co-evolving technology and humans? The answer, as the second law insists, is the environment. Today, this environment not only includes the usual victims—pollution of air, water,6 and land, climate change, including its catalyzing of wildfires and floods, the sixth extinction of species, the clearcut and burned desecration of rain forests, …—the environment of advancing tech includes us. Just not all of us. Not the insiders symbiotically rising with the products, insights, and access to power-feeding order (money) that they bring about and exploit. Their driven, specialized, outlier intelligence is maximizing their freedom of future self-serving action. The entropic compensation is the rest of humanity, whose freedom of action, captured by the deeply exploited draw of convenience coupled to the ad-hyped illusion of power, is being increasingly trapped by design.7

The implication of co-evolution is that we Homo sapiens and technology advance together as rising intelligence in the universe.

Shoshana Zuboff captures the gist in her new book, The Age of Surveillance Capitalism, writing that we are the “objects of a technologically advanced and increasingly inescapable raw material extraction operation” [26, p. 10]. We and our offspring are the raw material being extracted. She solidifies her claim by citing Facebook’s (since morphed into “Meta”) marketing director, who “openly boasts that its precision tools craft a medium in which users ‘never have to look away,’” and adds the caveat that “the corporation has been far more circumspect about the design practices that eventually make users, especially young users, incapable of looking away” [26, p. 453]. Or, as Wissner-Gross might put it, by corporate design their intelligence gets trapped. Their inability to look away also sets up the perfect storm of authoritarian state surveillance feeding on Big Tech-enabled surveillance, crushing future freedom of action as the cutting-edge merger of George Orwell’s 1984 with Aldous Huxley’s Brave New World. In her WIRED article, “The complicated truth about China’s social credit system,” Nicole Kobie writes that “China’s social credit system has been compared to Black Mirror, Big Brother and every other dystopian future sci-fi writers can think up. The reality is more complicated—and in some ways, worse” [27].

In Addiction by Design, Natasha Dow Schüll fastens on the dependency trap captured in her book’s title and her in-depth focus on the designed-to-hook machinations of Las Vegas casinos [28]. But freedom-of-future-action-crippling addiction by design in Vegas does not just stay in Vegas. It self-similarly branches from physical slot machines to the slot machines of smartphone/social media apps, to our crucial GPS driving companion, to our rolling computer on wheels called a car, with its 100 million lines of doing-it-all-for-us, intelligence-trapping code [29].

Forwarding that goal is B. J. Fogg, founder of the Stanford Persuasive Technology Laboratory, whose aim is to alter human thoughts and behaviors via digital machines and apps. According to Fogg, “We can now create machines that can change what people think and what people do, and the machines can do that autonomously … social-media apps plumb one of our deepest wells of motivation” [30], [31].

The dissipative mental, physical, and social consequences of Fogg’s persuasive design technology are given further weight by Brett Frischmann and Evan Selinger in their book, Re-Engineering Humanity, who claim “[they’re] not interested in the engineering of intelligent machines; [they’re] interested in the engineering of unintelligent humans” [32, p. 12]. According to Frischmann and Selinger, “we’re being sold a misleading vision of cyberservants and digital assistants. These tools don’t just do our bidding. They’re also smart enough to get us to do theirs … We are being conditioned to obey. More precisely, we’re being conditioned to want to obey.” As one commenter, C. V. Danes, wrote on the New York Times article, “QR codes are here to stay. So is the tracking they allow,” “I don’t think even George Orwell was cynical enough to foresee how readily people would embrace the surveillance society” [32, pp. 4–6], [33]–[35].

“We are being conditioned to obey. More precisely, we’re being conditioned to want to obey.”

Intelligence as a Dissipative Structure

The term “dissipative structure” was seeded in the early 1920s by U.S. mathematician, physical chemist, and statistician, Alfred J. Lotka. It became most closely linked to Nobel Prize-winning physical chemist, Ilya Prigogine, and his colleagues at the Brussels School. According to Prigogine, dissipative structures can grow increasingly complex over time by exporting their dissipating entropy. Concentrating organization in a system, say, a living organism, or an ecosystem, or AI, shrinks entropy in that system. By allowing ordered, powerful, structured energy to concentrate in one place, while displacing spread-out, dissipated energy someplace else, the second law more effectively, more efficiently, produces what it wants, more entropy, faster.8 As Rod Swenson, contributor to the IEEE conference “A delicate balance: technics, culture, and consequences,” put it:

[If] ordered flow produces entropy faster than disordered flow … and if the world acts to minimize potentials at the fastest rate given the constraints … then the world can be expected to produce order whenever it gets the chance … The world, in short, is in the order production business because ordered flow [as intelligence] produces entropy faster than disordered flow [36]–[38].

“What if humanity’s capacity to cooperate has been undone by the very technology we thought would bring us all together?”

The way this happens can be seen on the level of physical and chemical phenomena, including house-demolishing tornadoes, community-demolishing hurricanes, the Bénard cell phenomenon,9 autocatalytic chemical reactions like the Belousov–Zhabotinsky (BZ) color-flipping chemical clock, lasers, even the tornado-in-a-bottle demo [42]. It can also be seen in the frame of evolving intelligence as Wissner-Gross defined it. The maximizing of future freedom of action powers technical advance as a dissipative structure that more and more effectively concentrates organized matter and energy into the “accelerating returns” of technology, as inventor, futurist, and currently a director of engineering at Google, Ray Kurzweil, calls it, with outlier-smart human help, for now [43].

Erik Verlinde’s tie-in of gravity to information, coupled to its maximizing of entropy, carries on in the continued evolution of Wissner-Gross’s characterization of intelligence, as evolving life on Earth molds the biosphere to more effectively suit its accelerating needs—the so-called “Gaia” hypothesis. It continues to climb the power ladder, faster and faster, through proto-human and human history and its conceptual commandeering of disordered order onto evolving self-similar freedom-of-action tributaries: upright walking, the sharpened-edge stone, the spear, the harnessing of fire coupled to the draw of the aroma and flavor of cooked food [44], the evolution of language, the freeing up of ever more specialized niches, military and civilian, thanks to the agricultural revolution, the Renaissance, science, the industrial revolution, on up the near-vertical rising exponential of human-initiated AI. In this multi-billion-year trajectory, the self-similar umbrella pattern of the second law is at work, organizing the ever more efficient, ordered-flow production of entropy throughout the sweep of evolving intelligence.

Modus Operandi

Realize it or not, gradients—differences that can make a difference—are what we are. As an algorithm for the creation of power-churning order, technology can be viewed in the frame of gradients. With respect to us humans, that power is both sustained and grown by ever smarter feeding on extractable gradients in our environment, and by deploying the acquired organized, purposefully transformable matter and energy in building and maintaining the internal power-enabling structures of brain, body, and relationship through mental, physical, and social exertion. Effort maintains and raises inner human gradients. When a technology comes along that eliminates the need for human effort, pandering to the principle of least effort [45]–[48], the deep-seated urge to minimize the built-in discomfort, the pain, of calorie-consuming exertion that once made survival sense in a world sans food-on-demand, what it is in effect doing is degrading the exertion-generated difference that makes a difference. Thanks to increasing reliance on technology to do more and more of the work, the advancing sum of losing-it-for-not-using-it exertion removal is weakening our mental, physical, and social muscles by rendering their exercise unnecessary. It enfeebles us through the increasing loss of the need to use, rendering us ever more dependent, collectively and individually ever more deeply in the grip of addiction by design to the promise of convenience: the intelligence-trapping dependency that consumes, the illusion that sells and hooks [47], [49], [50].

Degrading our Gradients: Self-Similarity at Work

Schneider and Kay propose that not only can gradient-degrading order spontaneously emerge from disorder in physical or chemical systems; networked, self-similar, self-organizing processes apply on up to living organisms, to ecosystems, and, we contend, to state-of-the-art technical advance as a leading edge of evolving intelligence. In a nutshell, the technical order creates an order in one’s life that eliminates the need to produce an equivalent order in one’s brain, body, or social engagement skill. More than this, it dissipates an existing inner order or, more significantly, eliminates its development in children through the loss of the need to use [22]. The issue, however, is not one particular hi-tech system, like GPS taking over for us the mental navigation work enabled by the hippocampus’s cognitive map. It is doing-it-all-for-us technology taking over more and more of the work, faster and faster [47].


In Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, political theorist, Langdon Winner, believes Mary Shelley’s quotation from Paradise Lost10 at the opening of her prescient novel, Frankenstein, captures the problem’s essence as “the plight of things that have been created but not in a context of sufficient care” [51]. The plight of Victor Frankenstein, Winner argues, is the plight of us, that his problems “have now become those of a whole culture.” Frankenstein discovers but refuses to ponder implications [51], [52]. Thanks to “a pervasive ignorance and refusal to know, irresponsibility, and blind faith … toward the technical … powerful changes [are being released] into the world with cavalier disregard for consequences” [52, p. 314]. We “‘use’ apparatus, technique, and organization with no attention to the ways in which these ‘tools’ unexpectedly rearrange” us. We “participate without second thought in megatechnical systems far beyond [our] comprehension or control.” We “endlessly proliferate technological forms of life that isolate people from each other and cripple rather than enrich the human potential” [52]. We “stand idly by while vast technical systems” ever more cleverly reverse adapt human ends “to match the character of the [increasingly] available means” [52, p. 229], [49].

And so Here We Are

Artificially intelligent software and hardware have finally rounded the exponential bend upward to “smart enough to do the job themselves.” When more and more11 lose their jobs to technical systems, those systems will be discharging entropy into their lives as they lose access to life-sustaining order, the differences that can make a difference called jobs, money, food, housing, opportunity, fulfillment, meaning, power, health, family, community …, crushing their future freedom of action.

And even if the hardware and software systems do most of the work that needs doing, freeing us to do whatever we want with our unemployed leisure, the loss of the need for human mental, physical, and social exertion in jobs swept away by AI will see to it that the future will be a rapidly rising world of targeted human dissipation. Serving as reverse-adapted, “conditioned to want to obey,” entropic-effluent recipients of dopamine-fueled, Skinner box-designed “objects of a technologically advanced and increasingly inescapable raw material extraction operation” is not smart.

Backstage of our affairs with technology, on all scales, from the up close and personal, physical, mental, and social, to our collective impacts on the biosphere from which we extract resources and into which we discharge wastes, is the proposed reality that nature really does, by all means at its accelerating disposal, go all out to maximize the dissolution of gradients. In service of that all-consuming goal, the second law of thermodynamics allows, indeed wants, order to rise up, to concentrate, to make targeted differences in driven systems that more effectively and efficiently increase what it wants: success in quashing, in the environments of those systems, all differences that are making or can make a difference. Recognizing that advancing technology is the leading edge of the thermodynamic imperative to dissipate differences that can make a difference, differences that do not exclude us, is a first step en route to corralling the juggernaut.

The key to not allowing ourselves to serve as a convenient sink for technology’s vested interest-driven entropic effluents is to engage our lazy but vital critical brain intelligence (system 2), which engenders the means to resist the knee-jerk urge to let the technology do more and more of the work (system 1) [57]. Engaging the hey-wait-a-minute reflex sets the stage for injecting friction into the whirling gears of the untethered machine. Removing the smartphone from our hand, or Alexa from a first grader who knows his arithmetic but nonetheless asks her “what’s five minus three” and thanks her after she supplies the answer [49], creates a space for critical thinking to ask what all the apps doing it all for us are also doing to us, and to our children.12

Making the effort to think critically about what hooks and dissipates, this techno-media literacy must also include the caveat that this is not what autonomous, persuasively designed “machines that can change what [we] think and what [we] do,” treating us as “objects of a technologically advanced and increasingly inescapable raw material extraction operation,” want. Nor is it the path of least effort, of seamless, convenient shortcuts to painlessly getting what we want, on demand, when we want it. Corporations, their investors, and their executives, driven by the goal of maximizing profit, will go all out to block the incursion of backstage, sales-threatening, system 2 critical thinking, especially in the young, in the schools, in the home. “Technopoly—the submission of all forms of cultural life to the sovereignty of technique and technology,” as media critic, Neil Postman (1931–2003), defined it [62], does not want the emergence of an AI Greta Thunberg.

While in the largest scheme we, all of life, extending even to non-life as the self-similar upward thrust of intelligence, including the existence of gravity as a universal means of maximizing future freedom of action, share a oneness, under the thumb of competitiveness it is us against them. In the race to gain an innovative edge in advancing technology, proposing that “we” should do this, and we should do that, to avoid the looming threat of technics out of control will not work. Competition drives our future with the “accelerating returns” of advancing AI out of human control. The dilemma is that there is no “we,” but there has to be [63], [64].

In his New York Times Opinion Column, “Maybe humans can no longer get along,” former Technology, now Opinion columnist, Farhad Manjoo, asks this: “What if humanity’s capacity to cooperate has been undone by the very technology we thought would bring us all together? [While the] internet didn’t start the fire … it’s undeniable that it has fostered a sour and fragmented global polity—an atmosphere of pervasive mistrust, corroding institutions and a collective retreat into the comforting bosom of confirmation bias. All of this has undermined our greatest trick: doing good things together” [65], [66].

But it is not intelligence per se setting up barriers to collectively “doing good things together”; it is the blindered, intelligence-driven, all-out race to gain an advantage and capitalize on it that enables “the plight of things that have been created but not in a context of sufficient care,” at the parasitic expense of the many who, under the illusion of technology doing more and more for them, are bit by bit, qubit by qubit, skill by skill, job by job, losing it for not using it. When Alexa does the arithmetic, the child’s brain does not. When GPS does the navigating, the brain’s hippocampus does not. When the algorithm does the job, the human does not.


Citing Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, Richard Yonck likens what we humans might face in superintelligent AI to someone with psychotic obsessiveness and laser-like attention. Countering the argument that we could just include an OFF switch should the feeding-on-itself superintelligence threaten our human future, he writes that:

Unfortunately, such responses completely misunderstand and underestimate the nature of the threat we are creating. … This will be an intelligence that is on a par with or even far exceeds our own. This fact raises considerations and issues that need to be addressed before we move forward along the path to this technological singularity [2, p. 214], [67].

As a form of intelligence, the AI will possess what is known in economics as a “utility function,” its raison d’être. It will attempt to maximize its future freedom of action by optimizing itself. According to computer scientist, Stephen Omohundro, whose beyond-the-norm concern includes the social implications of self-improving AI, maximizing its utility function leads to four primary drives: “efficiency, self-preservation, resource acquisition, and creativity. Exercised correctly, each of these, either by itself or in concert with the other drives, can be used to more efficiently fulfill a system’s primary purpose” [67, pp. 214–215], [68].

The four drives that Omohundro claims maximize a technology’s utility function are really no different from what drives the evolution of intelligence. They also drive inequality, winners and losers, and cumulative advantage—the Matthew effect—as those who possess and capitalize on their advantage take advantage [69]. With technology pursuing the maximization of its own future freedom of action, Richard Yonck, citing what Nick Bostrom called the “malignant failure mode,” writes that “we really need to ensure that future AI is designed to remain as aligned with human values as possible. The alignment problem will require guidance from us and it will be the guidance of a worthy intelligence, which is where intelligence augmentation comes in” [2, p. 240], [49], [70].

But what are these human values? Are they the values of ransomware hackers? Who constitutes the “us” with the “worthy intelligence” to guide the AI? Are they technologists consumed by technical complexity? And who will be the recipients of intelligence augmentation? Will it be the rich getting richer, who can afford to be augmented, at the inequality-exacerbating expense of the many who cannot?

If Omohundro’s four primary drives do, in fact, represent the ongoing maximizing of AI’s utility function, the continuing advance of AI will enable its future freedom of action as artificial general intelligence (AGI) and then artificial superintelligence (ASI). Given the current trajectory of the all-out, winner-take-all global race for AI’s spoils, there is no guarantee that superintelligent AI will not self-similarly carry on what is going on right now, only far more efficiently: ensuring its own self-preservation by exponentially more intelligent acquisition of resources that, as today, includes the ongoing raw material extraction of inner human mental, physical, and social skill resources from people incapable of looking away.

As for “the plight of things that have been created but not in a context of sufficient care” in the matter of rapidly advancing AI and its out-of-human-control potential, James Barrat writes this:

Of the AI researchers I’ve spoken with whose stated goal is to achieve AGI, all are aware of the problem of runaway AI. But none, except Omohundro, have spent concerted time addressing it. Some have even gone so far as to say they don’t know why they don’t think about it when they know they should. But it’s easy to see why not. The technology is fascinating. The advances are real. The problems seem remote. The pursuit can be profitable, and may someday be wildly so … “Let someone else worry about runaway AI; I’m just trying to build robots” [15, pp. 234–235].

Advancing technology carries on Schneider and Kay’s order emerging from disorder in the service of causing even more disorder. The “accelerating returns [to itself]” of the vertically rising exponential has, with ever escalating speed, twisted co-evolution into techno-evolution/human-devolution. The prospect of untethered ASI, maximizing its own future freedom of action at entropic human expense, not only far exceeding human intelligence but actively devolving it, elite human intelligence downstream included, is setting us, the human species, up for checkmate. This is not smart.

At the close of Superintelligence Nick Bostrom expressed his concern like this:

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound [71].


Author Information

Jeff Robbins was with Rutgers University, New Brunswick, NJ, USA.


The complete version of this article, including references and footnotes, is available to SSIT members/subscribers here.