Deepfake Videos and DDoS Attacks (Deliberate Denial of Satire)

By Jeremy Pitt on December 15th, 2019 in Editorial & Opinion, Entries, Ethics, Human Impacts, Magazine Articles, Societal Impact

During the Second World War, in 1943, the Japanese authorities in China interned a couple of thousand “enemy nationals” from “hostile countries” (mostly America and Europe) in an isolated compound in the province of Shantung (hence the Shantung Compound [1]). Unexpectedly, the internees were left to organize themselves, so in addition to taking care of basic necessities (e.g., food, warmth, and shelter), the compound occupants had to devise their own political and organizational arrangements. Relatively quickly, a hierarchy of power emerged as a few men (it was always men) attained a subtle but tangible dominance, attempting to replicate the status and prestige that they had enjoyed prior to their internment.

Satire has been used proactively throughout human history as a means of mocking the pomposity and pretension of others.

However, this self-organization was disrupted by a minimal intervention from the “external authority,” coming in the form of a requirement to constitute nine four-man committees to represent the entire compound’s concerns, including Labor, Education, Supplies, Quarters, and one named “General Affairs.” The chair of each committee was to form a nine-person group which would interface with the Japanese commander. It was assumed that “General Affairs” was the most prominent committee, so that the chair of that committee would be, de facto, the primus inter pares (first among equals), and so by extension the leader of the entire compound. Consequently, the most domineering men jockeyed first for a position on this committee, and subsequently among themselves for the role of its chairman.

The winner of this power struggle was mortified, though, to find that the word “Miscellaneous” had been mistranslated as “General,” and that far from being the overarching committee for general compound policy, it was concerned only with a medley of minor issues, each of which was not considered sufficiently important to warrant a committee of its own. But the compound’s response of universal derision served to diminish, and then completely demolish, the political pretensions of the person appointed as (or who had triumphed in getting himself appointed as) chairman:

When this coveted prize, over which our giants had fought, turned out to be miniscule, the camp hooted with derisive delight. … the victor, was not merely embarrassed, but downright sulky about it. He promptly announced his resignation, indicating that now that he understood what the job involved, he saw that it was too small for a man of his stature. At this the camp hooted once more; [the victor] never acquired political prominence again [1, p. 33].

Thus was a supposedly powerful man, aiming to be the most powerful, even in such a meager context, brought low: by and to the ringing sound of derisive laughter.

And so, the Shantung compound could provide a basis for studying the evolution of an institution, to see whether it satisfied Ostrom’s institutional design principles for sustainability [2], or even for an approximation of Ober’s thought experiment for Demopolis [3] (although the compound’s occupants were a cross-section of the general population, not just those preferring to avoid tyranny; and yet avoid it they did). But this single, small incident also raises some important questions in a contemporary context: questions about the social mechanism of expressing civic disapproval through the medium of satire, and about the technological development of deepfake videos, which threatens to subvert the subversive nature of satire itself.


The derisive, mocking laughter that accompanied the Shantung Compound’s would-be leader’s unfortunate fall from influence and political power (if not grace) was, presumably, motivated by schadenfreude (one person’s pleasure derived from the misfortune of another), a pleasure made all the more amusing by the irony that the man should have been the unwitting author of his own misfortune.

If what is fake can be made to be seen as real, then what is real can be made to be seen as fake.

However, satire has been used proactively throughout human history (or at least since Roman times, when the rhetorician Quintilian (c. A.D. 35 to c. A.D. 100) first deployed the neologism, although the linguistic/humorous device featured heavily in the writings of Horace and Juvenal) as a means of mocking the pomposity, pretension, and superior aspiration of others, without waiting for misfortune to fall upon them. Indeed, satire appears to have been common across both time and cultures, and the English language features many literary exponents, including (in the U.K.) authors such as Pope, Swift, Dickens, Huxley, and Orwell; while in the U.S., Heller’s Catch-22, Chaplin’s The Great Dictator, Kubrick’s Dr. Strangelove, and Carlin’s comic monologues demonstrate that satire can be delivered through literature, film, and stage performance. In contemporary times, satire is well-represented through alternative comedy, graphic cartoons, and, in particular in the U.K., the magazine Private Eye.

In particular, satire has provided an essential social and psychological function: there are arguments that satire acts as a social leveler, satisfying a popular need to ridicule, if necessary, leading figures in politics, arts, or religion when their policies, ideas, or opinions (especially of themselves) are wildly out of touch or grossly inflated. Furthermore, it can serve as a useful tool for critique, holding up the proverbial “mirror to society,” challenging citizens to reflect on their values and, perhaps, prejudices, while pressurizing administrators to justify or amend their policies. In the latter respect it is a form of dissent, which “allows critics to expose inconsistencies between core values and current practices” [4, p. 4]. Indeed, in some national constitutions it is specifically marked out as a legitimate form of dissent [5].

Deepfake Videos

The early pioneers of photography were not averse to distorting and manipulating images [6], and yet there is an old adage that “the camera does not lie,” even though a camera lens rarely (if ever) reproduces a scene in the same way that a human eye perceives it. Nevertheless, the noun “Photoshop” made the transition to verb, such was the ease and ubiquity of the process; yet there remains a tendency to believe or trust a photograph, and by extension, a video. This tendency would, at least partially, explain the rise of social media influencers and their successful exploitation of video on YouTube channels and Facebook, despite the fact that the content is often sampled, distorted, or manipulated, or has financial motivations that are not openly revealed.

However, recent advances in artificial intelligence, coupled with the continually decreasing cost of computing power, have made it possible to develop software that can “do” to video what Photoshop could “do” to photographs. The advance in artificial intelligence is based on a class of machine learning algorithms called generative adversarial networks (GANs) [7]. In a GAN, two artificial neural networks compete with each other in a “game,” where one network, the generative network, generates data that is evaluated by the second network, the discriminative network. The discriminative network is trained on a known dataset until it achieves acceptable accuracy. Then, the generative network learns to generate new data with the same statistical characteristics as the original (training) data, depending on whether or not the generated data succeeds in being classified as “real” by the discriminative network. The outcome of the discriminative network’s decision is used to fine-tune the training of both networks. Eventually, both networks will plateau, in the sense that they cannot improve any further, at which point, in the preferred outcome, the discriminative network is unable to differentiate between real data and synthetic data produced by the generative network (the synthetic data are sufficiently “realistic” to be indistinguishable from real data).
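The adversarial loop described above can be sketched in miniature. The toy example below is purely illustrative (real deepfake systems use deep convolutional networks; all parameters and learning rates here are assumptions for the sketch): a linear generator tries to transform Gaussian noise into samples matching a “real” one-dimensional Gaussian, while a logistic discriminator tries to tell the two apart, and each update uses the discriminator’s verdict, exactly as the text describes.

```python
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real" data distribution

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Generator: x_fake = wg * z + bg, with noise z ~ N(0, 1)
wg, bg = 1.0, 0.0
# Discriminator: D(x) = sigmoid(wd * x + bd), "probability x is real"
wd, bd = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = random.gauss(0.0, 1.0)
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    x_fake = wg * z + bg

    # Discriminator update: push D(x_real) toward 1, D(x_fake) toward 0.
    # Gradients of the loss -log D(x_real) - log(1 - D(x_fake)).
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    wd -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
    bd -= lr * ((d_real - 1.0) + d_fake)

    # Generator update: use the discriminator's verdict on the fake
    # sample to push D(x_fake) toward 1 (non-saturating loss -log D).
    d_fake = sigmoid(wd * x_fake + bd)
    g_x = (d_fake - 1.0) * wd        # gradient w.r.t. x_fake
    wg -= lr * g_x * z
    bg -= lr * g_x

print(f"generator now produces roughly N({bg:.2f}, {abs(wg):.2f}^2); "
      f"target is N({REAL_MEAN}, {REAL_STD}^2)")
```

After training, the generator’s output distribution drifts toward the real one: the “plateau” the text mentions corresponds to the point where the discriminator can no longer separate the two and its gradients stop providing a useful training signal.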

While GANs have already achieved significant beneficial results, a potentially more insidious application is their use to generate deepfake videos. The deepfake technique of image synthesis uses a GAN to superimpose existing images and videos of human subjects onto corresponding source images or videos, so that not only is the discriminative network unable to distinguish between real and synthetic data, but neither can a human observer.

Disturbingly, but not altogether surprisingly, deepfake pornographic videos have been made, where images of famous people’s faces have been superimposed over the original actors and actresses (depressingly, it is nearly always the actresses). The software tools have also been made more readily available, such that with access to a sufficiently powerful computer and enough data about the target, it is possible for such videos to be routinely made by “amateur” developers about less-famous people. For some, this involves (for Freud-knows-what reason) superimposing themselves on the video; for others, it involves superimposing another person (e.g., for reasons of “revenge”), which can be very difficult to counteract, and very damaging to personal and professional reputations.

However, distressing as this (mis)use of the technology is, a number of writers have argued that democracy itself is under (yet another) threat from deepfake videos.

The Potential Damage to Democracy

It has been argued that deepfake videos present a number of possible threats to politics, politicians, and democratic political regimes [8]. Firstly, deepfake videos could be used to create compromising material about politicians: for example, the digitally altered video of U.S. House of Representatives Speaker Nancy Pelosi appearing to slur drunkenly was viewed millions of times and tweeted by the U.S. President, and although the video is demonstrably a hoax, the tweet remains undeleted. Secondly, misinformation that is already ruthlessly exploited by the merchants of doubt [10] on mainstream media, where fact checking is secondary to the pressures of “churnalism” [9], can be spread even more rapidly by social media, where veracity runs a poor third to the willingness to believe and the desire for likes, shares, and retweets. Thirdly, whatever national laws are passed do nothing to prevent transnational interference, whether motivated simply by the financial incentives of “clickbait” or by the willful subversion of a foreign power. Timing here is everything: the release of deepfake videos (like the release of fake news in the last few weeks of the U.K. Brexit referendum campaign in 2016) could flip an election before the debunking catches up. Finally, the technology could even be used for “false flag” operations: make and release a fake video, then blame (and so discredit) the political opposition for doing so.

After the fall of the communist regimes and the end of the Cold War, Fukuyama [11] proposed the “end of history”: that nation states globally would generally come to be governed and organized around the principles of liberal democracy. While that outcome is yet to materialize, given the opportunity most people would prefer to live in a liberal democracy. But the progressive values of a liberal democracy include tolerance, inclusivity, environmentalism, social justice, and so on, rather than tax cuts and self-enrichment. In such a context, the only way for the political right to succeed is to be both utterly shameless and completely reckless. However, a side effect of such recklessness is that satire becomes a relatively ineffective tool of critique and dissent: it is very difficult to ridicule people when they themselves propose policies or make statements that are more outlandish than anything a satirical sketch could imagine.

Furthermore, this is where the baleful influence of deepfake videos becomes most apparent. If what is fake can be made to be seen as real, then the unfortunate corollary is: what is real can be made to be seen as fake. Consequently, any real announcement of some ludicrous or irresponsible new policy, which is jettisoned on its first brush with reality, could be dismissed, by those who were determined to do so, as a deepfake; as the poet nearly said: with confirmation bias, the gods themselves contend in vain. This psychological disposition, in conjunction with a lack of accountability, could reduce political discourse to the level of a playground “he said, she said” exchange, i.e., a pointless dispute over facts, not a constructive negotiation over evidence-based policies for prioritizing shared values based on an agreement of the facts [12].

The recklessness and shamelessness that diminishes the potency of satire as a tool of political dissent has become so widespread, and so commonplace, that it could almost be seen as a willful, intentional strategy: aided and abetted by misuse of deepfake videos, the strategy amounts to an attack on civic discourse that could be reasonably termed the deliberate denial of satire (DDoS).

Defense Against DDoS

How can satire be reclaimed in such a context? It is by no means clear that educational programs that encourage citizens to be cynical about everything they see on the Internet would be so helpful: there is a difference between critical thinking and automatic gainsaying. The erosion of public trust that follows from individual doubt could lead to each citizen having a selective, personal interpretation of “reality,” and there being no intersubjective reality. Yet such a lack of common knowledge diminishes opportunities for successful collective action [4].

Similarly, an entirely technological solution will not provide a panacea by itself: although computational solutions to provenance and tampering are possible [13], as with cybersecurity, there will be an ongoing “algorithmic arms race” between the fakers and those trying to expose the fakes [8].
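As a minimal illustration of the provenance idea, consider a cryptographic digest recorded at publication time. This is only a sketch of the general principle, not the approach of the cited work [13]: a plain hash defeats only bit-level tampering, and the published digest must itself be distributed through a trusted channel. Still, it shows why detection is computationally cheap while evasion is not:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that could be published alongside a video
    as a tamper-evidence record (illustrative sketch only)."""
    return hashlib.sha256(data).hexdigest()

# At publication: the creator records the fingerprint of the original file.
original = b"...original video bytes..."
published_digest = fingerprint(original)

# Later: anyone can recompute the digest and compare it to the record.
tampered = b"...original video bytes, subtly altered..."
print(fingerprint(original) == published_digest)   # True: unmodified
print(fingerprint(tampered) == published_digest)   # False: altered
```

The asymmetry matters: verifying a file takes one hash computation, whereas forging a different video with the same SHA-256 digest is computationally infeasible. The “arms race” the text describes arises at the next level up, where fakers avoid touching fingerprinted originals and instead synthesize new, never-fingerprinted content.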

A legal solution would have some potency, but a purely legal solution runs the risk of unintended consequences and creating a new class of criminal offense out of legal activities. Image manipulation is as old as time, and as discussed, using such images for satirical purposes serves an important civic function. Trying to discriminate between “legitimate” satire and deepfake malfeasance is, perhaps, more likely to lead to legislation enforcing a blanket prohibition, which would, of course, be the preferred position for satire-denialists.

While educational, technological, and legal routes do offer partial solutions, ultimately, any defense has to include political will. The responsibility rests firstly with those who would seek public office to disavow the use of deepfake videos; and secondly, with an informed electorate who would have nothing to do with the kind of shameless power-grabbing charlatans who will neither accept citizens using satire to speak truth unto their power, nor speak truth unto their own citizens.

Author Information

Jeremy Pitt is Professor of Intelligent and Self-Organizing Systems at Imperial College London, U.K.

