Social Engineering, by Robert W. Gehl and Sean T. Lawson (Cambridge, MA, USA: MIT Press, 2022, 344 pp.)
Reviewed by Nathaniel Knopf
Spend enough time online and you will inevitably come across a political debate. Within these debates, you can count on accusations that some participants are actually bots: though their profiles appear to belong to ordinary people, the accusation is that they are fake accounts created to spread propaganda and mislead.
The accusation is difficult to prove, but the suspicion is based on real events. Back in 2016, acting on behalf of the Russian government, the Russia-based Internet Research Agency (IRA) attempted to interfere in the U.S. Presidential election. Actors at the IRA managed fake social media accounts which, backed up by an army of autonomous bots, posed as Americans espousing pro-Trump and anti-Clinton views.
This was not the only online interference in that election cycle. The British firm Cambridge Analytica used deceptive means to gather Facebook data on 87 million Americans without their knowledge and then used this data to create psychological “profiles.” Working with the Trump campaign, Cambridge Analytica then created “psychologically targeted” ads meant to sway potential voters. It is hard to say how effective either operation was in influencing public opinion. The events themselves, however, are well established. In Social Engineering, authors Gehl and Lawson frame these operations as examples of “masspersonal social engineering,” their framework for manipulative online communication with roots in social engineering.
Masspersonal social engineering has four essential ingredients: 1) “trashing,” which amounts to snooping on one’s target to gather data; 2) “pretexting,” taking on a new identity to gain credibility; 3) “bullshitting,” a mix of appearing informed, disregarding the truth, and being friendly in order to get on the target’s good side; and 4) “penetration,” ultimately getting the target to do or think something they otherwise would not.
To define this framework, the authors draw from two schools of social engineering: first, the mass social engineers of the 20th century, who used newspapers, radio, and television to create what is now public relations; second, the interpersonal social engineers of phone and computer hacking, who exploit human weaknesses to break into computer systems and buildings. The authors then tackle each of the four pillars—trashing, pretexting, bullshitting, and penetration—and how they apply specifically to mass and interpersonal social engineering. Gehl and Lawson argue that “masspersonal” social engineers use the same techniques as these earlier social engineers to great effect on individual and societal levels, all made possible by “the unique affordances of the Internet and social media platforms.” Masspersonal social engineering, they claim, not only describes the strategies of the IRA and Cambridge Analytica, but also presents a new, unique cause for societal concern. Unfortunately, what the book fails to capture is perhaps more important: what allows for rampant online manipulation, and how we can loosen its hold over us.
Let us start with what the book gets right. As far as tools to describe online manipulation go, this book is a decent place to start. The provided framework for masspersonal social engineering highlights the steps necessary to create what is essentially immersive online propaganda. In doing so, it gives one the tools to compare instances of online manipulation and to distill what the bad actors are trying to do. The book takes a roundabout path to deliver the framework—it is not until page 165 of the 225-page book that the authors turn from the earlier social engineers, leaving time to discuss only two examples of masspersonal social engineering. But the lengthy history does fulfill the authors’ promise to connect masspersonal social engineering to its ancestors, and various digressions on the toxic, hypersexualized language used by computer hackers and on garbage as an overlooked byproduct of knowledge production may be of interest to some.
In the last chapters, to showcase masspersonal social engineering, the authors apply it to the IRA and Cambridge Analytica. The framing of the IRA’s actions is cohesive enough, but the application to Cambridge Analytica raises some potential issues with the four pillars of their framework. The authors treat these four pillars—trashing, pretexting, bullshitting, and penetration—as essential ingredients of masspersonal social engineering. For Cambridge Analytica, however, they manipulate the functions of these pillars to better fit the framework to the event. The authors recap how Cambridge Analytica captured the data of 87 million Facebook users (i.e., “trashing”) using innocuous-appearing games and web applications—strategies which the authors describe as “pretexting.” This use of pretexting to facilitate trashing differs from the function of pretexting elsewhere in the book, which is typically to gain credibility with the target during active manipulation. This malleability of the four pillars suggests that the framework as a whole might not be so precise. That said, one can imagine applying it to examples not discussed in the book, such as guerrilla marketing campaigns, so the concept probably has broader use.
The book presents a reasonable framework for discussing online manipulation, but the authors go one step further than presenting a new framework. Gehl and Lawson claim that masspersonal social engineering presents a unique threat to society. The events highlighted by the book are certainly representative of a great disinformation crisis facing our nation. But masspersonal social engineering fails to fully capture and address the core, systemic problems contributing to disinformation, instead focusing on individual behaviors. Specifically, the authors fail to address the unique, critical role that social media companies like Facebook play in making disinformation schemes possible.
The authors mention that Facebook is incentivized to sell advertising space to anyone with the cash, and they correctly call for stronger data protection laws in the United States to match those in Europe. But the real reason behind events like these is not that bad actors can advertise on Facebook or that Facebook makes personal data more accessible; addressing these problems alone would be too superficial to prevent manipulation. To explain why, it is necessary to illustrate how social media companies have intentionally created platforms that actively spread disinformation, and why they will continue to do so without major outside regulation.
Social media websites such as Facebook use machine-learning models to decide what content should be served to users. In order to maximize user engagement, and therefore profits, Facebook’s models are specifically configured to spread polarizing, inflammatory, and often misleading content like wildfire. In an exposé on Facebook’s machine-learning models, Karen Hao of the MIT Technology Review confirmed exactly this:
“A former Facebook AI researcher who joined in 2018 says he and his team conducted ‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. ‘Over time they measurably become more polarized,’ he says.”
Hao also describes internal researchers making shocking discoveries, such as Facebook’s own recommendation tools being responsible for 64% of all extremist group joins on the platform. Attempts to limit this were dismissed internally as being “antigrowth.” Task forces meant to assess the relationship between political polarization and engagement were disbanded. In this way, Facebook has chosen to assume a crucial, symbiotic role in disinformation campaigns: bad actors publish lies and manipulative content, which Facebook preferentially spreads, generating profits along the way.
We saw the consequences of this symbiosis on 6 January 2021. Facebook’s own algorithms contributed to the creation of an army of conspiracists who believed that deep-state child predators had thrown the election to force Trump out of office. They were willing to commit treason for their cause. To Meta, Facebook’s parent company, this is just an externality of a wildly profitable strategy.
There is always the chance that with enough negative attention, companies might begin self-regulating to protect their image. However, given Facebook’s pitiful attempts at self-regulation, which saw the deactivation of a model that filtered antivaccine misinformation in the name of “fairness” to conservatives, I think it is unlikely that self-regulation would do much good. Relying on it certainly has the potential to leave us open to masspersonal social engineering and other harm.
So what can we do to protect our society against disinformation? A good place to start would be limiting how large and powerful these social media platforms can get. The sort of profit-seeking behavior that leaves the door open for disinformation and masspersonal social engineering is not unique to Facebook. If Facebook’s hypothesis is correct, that feeding users polarizing content is the best way to increase engagement, then the problem generalizes to any social media company that makes more money with increased engagement, which is basically all of them.
In the book’s conclusion, Gehl and Lawson give this phenomenon a nod, labeling it “the political economy of bullshit.” However, the solution they propose falls short. Their antidote is improved media literacy—to ask people to think critically and parse out the bullshit they see online. I do not wish to bash critical thinking, and our country’s lack of critical thinking skills certainly bears some of the blame for how easily conspiracy theories can spread. However, the political economy of bullshit is a systemic problem, and systemic problems require systemic solutions. Whether the problem is disinformation, climate change, or a pandemic, relying on individual responsibility as a solution all but guarantees failure. So, the big issue that Gehl and Lawson fail to address is this: anything relying on mass manipulation, be it election interference, antivaccination conspiracies, or future insurrections, is more possible in a world where social media companies are allowed to exist in their present form.
If masspersonal social engineering is really the societal threat that Gehl and Lawson say it is, then we need to think harder about the systemic causes that make it possible. Fortunately, there are steps that our society can take to limit the power and scope of platforms like Facebook, thereby reducing the opportunity for disinformation campaigns and masspersonal social engineering. We do not have to allow corporations like Meta to grow so unfathomably large. Contrary to what Meta and other social media companies would like us to believe, we have the power and the means to regulate their behavior. We can interrupt this trend of radicalizing people for profit, and, with any luck, protect our society from mass delusion and subsequent destruction.
Nathaniel Knopf is a medical student at the Keck School of Medicine of the University of Southern California, Los Angeles, CA 90033 USA. Email: email@example.com.