Social Media Enables Undue Influence

Some of our most popular technologies are becoming a means of mass coercion that open societies cannot survive.

For over a century, psychologists have known the power of coercion to direct behavior. Under certain conditions, people can be manipulated into thinking and behaving in ways that override their critical faculties and personal choice—the mind can be “hacked.”[1] Today we are living in a new era of propaganda and psychological coercion emerging from the intersection of behavioral science, neuroscience, data science, and artificial intelligence. The result is a problem of world-historical significance. In this paper, the fourth in our series on propaganda, we describe how a small group of technologists has created machines with the potential to systematically undermine individual autonomy—and thereby the sovereignty of democratic nation states.

It has been more than a decade since “persuasive technologies” and “nudging” were first advanced as benign forms of social control. The U.S. and UK governments established “Nudge Units” that applied behavioral sciences in diverse contexts to influence behavior at scale.[2] At the same time, companies such as Google and Facebook were exploring nearly identical ideas, capturing attention and modifying behavior in pursuit of advertising revenue.[3] The alchemical wedding of psychology and digital technology has resulted in a new kind of social reality in which advanced techniques of coercion are a common part of everyday life.

…according to technical legal definitions provided by psychologists, so-called “persuasive technologies”—especially social media—have crossed the threshold from persuasion to coercion.

"Undue influence" is a legal term used to describe situations more commonly referred to as "mind control" or "brainwashing."[4] When individuals are under the spell of truly coercive communication, they are not freely choosing their actions and beliefs. We argue here that according to technical legal definitions provided by psychologists, so-called "persuasive technologies''—especially social media—have crossed the threshold from persuasion to coercion. Large segments of the population are subjected to undue influence over their minds and behaviors.

The histories of propaganda reviewed here and elsewhere[5] demonstrate that knowledge about large-scale mind control has simply been waiting on technologies powerful enough for full deployment. Examples from contemporary propaganda make clear that, by legal definitions, many individuals' ideas and choices are no longer their own. Instead, they have been adopted as a part of their identity under conditions of undue influence.

The implications for governments appear dire, especially for those seeking to maintain democracy. For centuries, governments have been the primary locus of social control within their borders. Law, education, propaganda, and economic regulation allowed governments to shape the behavior of their citizens, ideally in the interest of the public good. This is no longer the case. Information technologies now run interference between citizens and their governments, creating a situation where “behavioral entrainment” to the social network is stronger than the ideals and laws of the nation. As recent years have made clear, the dynamics of voter behavior, political protests, and public health campaigns have been profoundly impacted by social media. The full extent of this problem is only just becoming apparent.[6]

Regulation and reform are necessary but far from straightforward. Many of the people and groups tasked with understanding and regulating these technologies are themselves subject to the effects of the technology. It is one thing to regulate an industry creating environmental destruction thousands of miles away. It is another to regulate an industry whose dangers are unfolding within your own mind. The externalities of many technologies result in environmental pollution and eventually obvious harms to public health. The externalities of existing social media technologies include widespread psychological dysfunction and political polarization. Politicians, their constituents, journalists, and scientists are themselves often heavy users of social media, and are being driven into polarization and irrationality as a result. No user of social media technologies is exempt from these effects.

The first steps to address this challenge are educational. Society must begin to understand just how coercive our informational environments have become. Understanding the dynamics of how undue influence works is the first step in regaining freedom of mind. Once this step has been taken, the next is to work on creating forms of social media that serve the public interest in education, civil society, and fair markets.

When Communication Technologies Become Mind Control Technologies

It can be hard to convince people of the effectiveness of propaganda as a means for impacting the minds and behavior of whole populations. As proof of concept, some scholars have pointed to events in 1938, when Orson Welles read the H.G. Wells extraterrestrial invasion story, The War of the Worlds, aloud on the radio as if it were happening live. Even though listeners were told multiple times during the broadcast that the story was fiction, it caused a panic across New Jersey. Compelling narration rendered through what was then a “high tech” radio broadcast system was simply too much for the minds of listeners. The same exercise was repeated in Ecuador in 1949—a celebrity personality reading The War of the Worlds aloud (in Spanish)—only this time, after the panic, when the people learned what had happened, a riot broke out, resulting in the burning of a radio station and twenty-one deaths.[7]

The persuasive and mesmerizing power of radio was noted by observers of the technology’s emergence and widespread adoption.[8] The use of radio during war mobilization efforts had made broadcasts a source of authority, lending the medium a symbolic power as the mouthpiece of science and government. Radio, and later television, would become definitive factors in how reality was understood by everyday citizens. This was in part due to the use of these technologies by governments to control their populations both during and after the major wars of the 20th century. Centralized propaganda distributed via broadcast communications technologies was the only show in town for decades.

Television’s entrancing power was also well documented in the 20th century, and attempts were made to use it explicitly for large-scale mind control. In October 1989, with the Berlin Wall about to fall and the Cold War ending, a series of unusual TV broadcasts issued from Moscow.[9] The world-famous physician and psychotherapist Anatoly Mikhailovich Kashpirovsky was attempting mass hypnosis. The agencies responsible for the broadcasts believed that television could enable the use of mind control techniques on the entire population at once. The goal was to quell the rising tide of political unrest within the Soviet Union. By all accounts the effort failed. But this seemingly absurd attempt at mass mind control reveals the ambitions held by most major nation states during the era of television’s dominance.

Screen-based mind control and brainwashing were researched intensively by both sides during the Cold War. The results reached the public mostly through advances in advertising. The Stanley Kubrick film A Clockwork Orange captured the worst possibilities for the public imagination: its protagonist, a villain, is forcibly brainwashed through prolonged exposure to images on a screen, accompanied by music and psychotropic drugs.

Research has shed light on the neurological effects of watching film.[10] At a basic level, prior to reflective awareness, the brain only minimally distinguishes between what occurs on screen and what occurs in real life. This is why films are so engaging and why young and old alike can be deeply moved and captivated by what they see on screens: neurological suspension of disbelief. Images on screens can go right to the heart of the brainstem, as if what is being seen is really happening.

Memories created when watching events unfold on a screen recruit neuronal circuits nearly identical to those recruited by memories created when watching events unfold in real life.[11] This creates a kind of confusion at the very source of memory creation and retrieval. False memories composed of snippets of things seen on screens are very common. This phenomenon is not something that humans would have had to manage as a part of their lived experience prior to film and TV. The consequences of this profound change at the heart of human memory (and imagination) are only just beginning to be understood. Furthermore, sensory data presented on screen is typically beyond normal perceptual capacities (including, for example, multiple camera angles on the same scene, special effects, and rapid cutting of shots). The result is a blending of unrealistic and hyper-realistic screen-based experiences and memories into the rest of our psychological lives.

…the ambitions of the public relations and propaganda specialists were always limited by the technologies at their disposal.

These insights were not lost on propagandists and advertisers, who made a science of using moving images to capture attention and influence choices.[12] The use of films in recruitment and training for militaries, cults, terrorist cells, and gangs has been widely documented.[13] Moving images on screens can be used in ways that are psychologically coercive.

But the ambitions of the public relations and propaganda specialists were always limited by the technologies at their disposal. As much as Americans built their lives, and even their homes, around the television, there was only so much psychological and social pressure that could be exerted through the medium as it was commonly used.

Experiences of TV were limited by time and space.[14] Programming was scheduled weeks in advance, and viewing options were limited to programmed content. It was only possible to watch what was available, leaving viewers with limited options and forcing them into the rhythms of the scheduling. Viewing opportunities were also limited by one’s physical proximity to a TV set. Even as TV use peaked and a set was placed in every room of the house, no one had a TV on their person at all times. It was easy to be somewhere without TV, and therefore easy to disengage from behavioral entrainment.

During the heyday of TV, broadcasters and advertisers were unable to monitor individual reactions to their content. Consumer polling and focus groups were undertaken to form some general ideas about audience preferences, but these results were vague.[15] Demographic categories were quite abstract. What appeared on the TV screen could not be customized and individualized. Millions of people viewed the same thing at the same time in the same sequence. Only a few specific groups could be targeted, with the result that viewers outside those groups felt unaddressed by what they were watching on screen. As a function of its basic design, the technology could only “capture” so many minds.

Television was never a public social network for communicating between neighbors and friends. A television only receives broadcasts; it is unable to send them. Viewers are invisible to other viewers. Reactions to programming were largely private. This means that while television allowed advertisers and broadcast media to directly influence viewers, it did not allow viewers to directly influence other viewers. Therefore, the degree and intensity of social pressure was limited to one-way communication from “on high.” The technology did not allow for users to experience direct social pressure from their neighbors and friends, let alone fellow citizens in cities far away.

Digital technologies changed all of this, and thereby profoundly expanded the reach, impact, and effectiveness of propaganda and other forms of coercive communication. Society is now dominated by machines that do what TV sets never could. Today, screens are always with you. They surveil you, address you individually, and subject you to countless public social pressures.[16]

The personal computer and then the smartphone, popularized by Apple’s iPhone, normalized 24/7 access to computation. The data on use of computer screens is striking, especially when compared to statistics for TV during its peak. Some data suggest that teenagers are almost never off a screen of some kind, even when socializing face-to-face with friends.[17] Screens are now regularly accessed in contexts previously reserved for focus or relaxation, such as the car, the park, and the forest. In some places you are required to have your smartphone with you as a means of showing identification, making purchases, and disclosing biomedical information. Citizens now must create designated places and undertake special personal disciplines in order to be away from their screens. Indeed, the effects of being without a screen are now being researched.[18]

Personal and handheld computers surveil your movement through physical space and the informational environment.[19] Data about citizens has gone from demographic categories to microtargeted psychographics. Advertisements and other content are delivered based on where you are, what you look at on screen, what you type in your emails, and more. A customized psychological profile is used to deliver content in a sequence that is unique to you and that addresses you directly: your desires, your fears, your hopes. Algorithmically customized curation magically positions just the right thing in front of your eyes and ears. Real-time adjustments to behavior are possible, allowing for various kinds of conditioning protocols. The aim is always to keep your eyes on the site longer than you intended, often in a trance-like state. Your attention is captured by design.[20]
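
To make this mechanism concrete, here is a minimal Python sketch of an engagement-maximizing selection loop of the general kind described above. It is an illustration under stated assumptions, not any platform’s actual code: every topic name, weight, and score below is hypothetical, and real systems use large machine-learned models rather than hand-set numbers.

```python
# Hypothetical psychographic profile: per-topic engagement scores inferred
# from surveilled clicks, dwell time, location, and message content.
profile = {"outrage_politics": 0.9, "health_scares": 0.7, "sports": 0.2}

candidates = [
    {"id": 1, "topic": "outrage_politics", "accuracy": 0.3},
    {"id": 2, "topic": "sports", "accuracy": 0.8},
    {"id": 3, "topic": "health_scares", "accuracy": 0.5},
]

def predicted_engagement(item):
    # Rank by what the profile predicts will hold attention; the accuracy
    # or importance of the content carries almost no weight.
    return 0.9 * profile.get(item["topic"], 0.1) + 0.1 * item["accuracy"]

def next_item():
    return max(candidates, key=predicted_engagement)

def record_dwell(item, seconds):
    # Real-time conditioning: each second of attention strengthens the very
    # weight that caused the item to be shown, closing the feedback loop.
    topic = item["topic"]
    profile[topic] = min(1.0, profile[topic] + 0.01 * seconds)

shown = next_item()
record_dwell(shown, seconds=42)
```

The point of the sketch is the loop itself: the profile selects the content, and the reaction to the content updates the profile, with no term anywhere for the viewer’s interests as the viewer would state them.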

Where TV allowed us to look vicariously upon a stage of heroes and villains, social media thrusts us upon that stage to be cast in a role, or to assume one willingly. The pressure of conformity in online environments is strong.

Most profoundly, digital information environments allow people to be placed within virtual communities, made up of people known and unknown, who can address each other directly. Unlike TVs, networked computers enable the creation of screen-mediated public places, in which images, sounds, and texts created by anyone can be perceived by almost everyone. At any time, someone’s personal images, texts, and sounds can be brought onto the stage for others to perceive. This allows powerful social and psychological pressures to be applied.

Threats to someone’s “online self”—such as occur in “cyberbullying”—can cause real psychological damage, disenfranchisement, and even death.[21] Where TV allowed us to look vicariously upon a stage of heroes and villains, social media thrusts us upon that stage to be cast in a role, or to assume one willingly. The pressure of conformity in online environments is strong. Even those who “misbehave” and act like “trolls” conform to the norms of specific subcultures. The virality of dangerous trends among teenagers is driven by a deep psychological need to signal social status through appearances on the screens of peers.[22] The same dynamic occurs among adults, as Facebook showed with regard to voting behavior.[23]

Behavioral science, data science, and artificial intelligence are applied continuously in the design of technologies intended to “hack” the brains and minds of the individuals who use them.

These differences between TV and smartphones demonstrate that new dynamics of large-scale psychological influence are now possible. Behavioral science, data science, and artificial intelligence are applied continuously in the design of technologies intended to “hack” the brains and minds of the individuals who use them. Billions of dollars in research and development have been put into building information weaponry, to be directed against a single human brain, often a young one. Within this novel information environment, the possibility emerges for widespread use of undue influence.

[Image: a brain-shaped app icon showing 19 million notifications. ©Adaptive Cultures]

The Dynamics of Undue Influence

The reasoning that led a government to deploy hypnotic television broadcasts (only 32 years ago) was based upon accumulating evidence demonstrating the effectiveness of mind control techniques. Psychological vulnerabilities and the practices of coercion that exploit them have been known for decades. Likewise, there has long been concern about what individuals do and say when subjected to undue influence.

Scientific work on the topic began after the Second World War, as psychologists began to research what appeared to be successful cases of overt brainwashing. Robert J. Lifton pioneered the field while working as an Air Force psychiatrist.[24] Lifton uncovered deep structural properties of the particular social contexts that enable undue influence. In the decades since, much of this work has been brought together in research and advocacy aimed at advancing the legal specification of undue influence.

Behavior control involves primarily the control of attention.

Contexts of communication and interaction that allow for the exercise of undue influence are often thought of as having cultic discourse patterns.[26] Research in forensic psychology on cults has revealed how authoritarian control is exercised: through asymmetric techniques of few-to-many communication and control. The result is a context in which individuals are subject to coordinated control of behavior, information, thought, and emotion.[27]

Behavior control involves primarily the control of attention. Other aspects of personal life are also involved, such as sleep and diet. Information is controlled by surveillance, and there is tight censorship of what can be read, watched, heard, and discussed. Ideally, for maximal coercive effect, those working to control behavior know enough about their targets to arrange for just the right routine to expose vulnerabilities that will make them susceptible to personality and ideological capture.

For most of human history, this could be accomplished only in contexts that allow physical control, such as communal living in training centers or shared houses. Social media, however, enables behavior control without the challenges of physical coercion and proximity. The constant presence of smartphones and their accepted role in monitoring both movement and information allow communities to be tightly coupled behavioral and cultural units, despite not being in spatial proximity. This dynamic can be seen clearly in any number of online subcultures.

Thinking can be controlled through the use of psychocultural devices, such as “thought-terminating clichés.” Robert J. Lifton discovered this tactic to be essential to the dynamics of oppressive and coercive groupthink in Maoist brainwashing re-education camps.[28] Phrases that can be used to stop further questioning of ideas are popularized and repeated frequently in conversation. Phrases that create out-groups and denigrated subgroups are used this way to great effect. Many of these are phrases that have emotional punch but make little logical sense. “The most far-reaching and complex of human problems are compressed into brief, highly reductive, definitive-sounding phrases, easily memorized and easily repeated.”[29] Social approval gained by repeating aloud the popular phrase stops thought and questioning in its tracks.

Lifton reported that in Maoist “re-education” camps, prisoners would sit in circles and engage in supervised “debates” for hours, usually in a state of sleep deprivation and hunger. In these discussions, any dissenting opinion could be dismissed as “bourgeois thinking” or “imperialist ideology,” while the main tenets of the accepted philosophy would be validated without explanation as “liberating ideas” or “scientific.” Prisoners were forced to engage for long periods of time in what Lifton called “the language of nonthought,” wherein certain phrases had a magical power both to end complex debate and to shame the loser.[30]

Emotional manipulation was exercised in these contexts through long durations of sensory overwhelm (high intensity “debates” for hours on end). Using these techniques, the more powerful and strategic party can easily manipulate feelings of guilt, fear, and confusion within overwhelmed and vulnerable targets. This highly emotional context allows for a discourse that is riddled with conceptual and behavioral double binds by design. There is no successful resolution to the trains of thought and courses of action suggested by the languages of nonthought, as they all contain elements of the impossible, illogical, and contradictory. This induces a generalized state of distress, which results in a shutdown of personal agency, and in turn allows all other dynamics of coercion to play out with less resistance.

This kind of treatment takes its toll on the identities and minds of those subjected to it. After months, many prisoners would begin to seek the social approval and rewards (such as sleep and food) that were offered when they enforced, and innovated in, the use of specific thought-terminating clichés. After many more months of repeating thought-terminating clichés with increasing conviction, many prisoners appeared to forget which of their ideas—if any—were their own. POWs who returned home after internment in Chinese camps during the Korean War faced difficulties of identity and cultural alienation. Some never came home at all, choosing instead to stay in North Korea.[31] Margaret Singer’s research on these Korean War POWs would confirm Lifton’s findings as to the effectiveness of “thought reform” techniques.[32]

Social Media Enables Undue Influence

The hallmarks of “brainwashing” and “thought reform” described above occur with remarkable power on social media platforms, usually without any centralized authority or plan.[34] Bots and trolls work in large numbers on both sides of polarized political divides, propagating thought-terminating clichés in key areas of the information environment. The same clichés draw individuals into senseless conflicts, all articulated in “languages of nonthought.” At the same time, ad campaigns “haunt” you across various platforms, as microtargeting algorithms track your browsing to deliver images and text in maximally distracting ways. Manipulative communication is the norm on many platforms, often to the point of being almost “creepy.”[35]

These digital environments have been designed to capture your attention, deploy surveillance, and then deliver your brain over to stronger, more knowledgeable, and overtly strategic parties for microtargeting.

The human mind now exists within a totalizing surround of digital technologies. Ubiquitous personalized computing has become the dominant medium, enabling economics and culture, and recruiting ever more human attention. One result is a historically unprecedented capture of personal and public communication by social media platforms. However, although these platforms are used as essential tools of identity formation and community building, they are not designed for such purposes. These digital environments have been designed to capture your attention, deploy surveillance, and then deliver your brain over to stronger, more knowledgeable, and overtly strategic parties for microtargeting. Whole populations are now dependent upon these technologies for the majority of their social interaction and connection, as well as valued communities and livelihoods. Billions of people are therefore vulnerable to undue influence.

Heavy social media use can result in a single individual being subject to undue influence from multiple competing parties at the same time. Elsewhere we have explored the implications of the resulting “limbic overload,” when personality systems become strained and incoherent as a result of multiple (and often contradictory) propaganda campaigns.[36] Heavy use can also lead to “algorithmic radicalization,” in which an individual is brought deep into a single stream of coercive engagement. This usually occurs through private chats, Facebook groups, and YouTube “rabbit holes” in which more and more of the same propaganda is algorithmically curated, while any countervailing information is excluded from view—all for the sake of maximizing user engagement in order to maximize advertising revenue.
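
The “rabbit hole” dynamic can be illustrated with a toy rich-get-richer simulation in Python. All numbers here are assumptions chosen for illustration: the feed samples topics in proportion to past engagement, each view multiplies the weight of whatever was shown, and nothing countervailing is ever injected.

```python
import random

random.seed(0)
# Three content streams start out equally weighted.
weights = {"mainstream": 1.0, "fringe_a": 1.0, "fringe_b": 1.0}

for view in range(1000):
    topics = list(weights)
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    weights[shown] *= 1.05  # engagement reinforces whatever was shown

total = sum(weights.values())
print({t: round(w / total, 3) for t, w in weights.items()})
# Typical run: one stream ends up with essentially all of the probability
# mass, i.e., the user sees more and more of the same.
```

Which stream wins is an accident of the earliest views; the collapse toward a single stream is driven by the multiplicative feedback, which is the structural point.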

Thought-terminating clichés have been pushed through print and TV—in fact, the same fundamental approach formed the basis of much traditional advertising. However, undue influence requires participation in a social ritual of using thought-terminating clichés with others. The social pressure to say the right thing and conform to discourse patterns online is enabled by an incentive landscape based on “likes” and “views.” Digital public spaces allow not only for the broadcast of thought-terminating clichés, but also for the binding of people into contexts in which they must gain social status through their use. Experiencing repeated social pressure and conversational closures based on the use of thought-terminating clichés creates psychological dissonance, and eventually numbing and downgraded cognitive capacity overall.
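
A toy replicator-dynamics model makes this incentive landscape explicit. The payoff numbers below are pure assumptions, not measurements; the sketch only shows the structural effect that whatever earns more social reward per use grows as a share of discourse, regardless of its logical content.

```python
# Share of discourse held by each phrase type, and the assumed "likes"
# earned per use (illustrative values only).
shares = {"thought_terminating_cliche": 0.10, "nuanced_argument": 0.90}
likes = {"thought_terminating_cliche": 12.0, "nuanced_argument": 4.0}

for generation in range(50):
    avg = sum(shares[k] * likes[k] for k in shares)
    # Replicator update: a phrase type grows in proportion to how its
    # reward compares with the average reward across the conversation.
    shares = {k: shares[k] * likes[k] / avg for k in shares}

print({k: round(v, 3) for k, v in shares.items()})
# The cliché crowds out nuance within a few dozen generations, even though
# it began as a small minority of the discourse.
```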

Sensory overwhelm and hypnotic trance are design features of platforms built for attention capture and digital advertising. This is the dazed and blank stare of those engaging in the so-called “infinite scroll”—overwhelmed with information and yet seeking more. Algorithms curate content that is likely to keep you engaged, based on your prior habits of engagement. Ads then appear “magically,” often showing you something right when “you were just talking about that!” You (not a demographic category, but exactly you) are being carefully monitored by forms of artificial intelligence that took billions of dollars and the most highly trained technical developers in history to create. The goal is to sell your information to people who would like to exert undue influence over your behaviors, and then give them the tools to do so.

This asymmetry means that it is not obvious how best to explain what is happening when individuals take to the streets (or don’t) in protest, or when they storm the nation’s Capitol in D.C. or the federal courthouse and certain police stations in Portland, Oregon. Consider the time spent looking at customized feeds of emotionally manipulative images, sounds, and text, which microtarget fear and outrage responses. Multiply the effect by the psychological and social pressure of digital communities. It is within screen-mediated public spaces that identities are formed and commitments made, yet here thought-terminating clichés have the power to destroy reasoning, and trolls and bots have the power to reinforce the messages and burn any bridges back to a place of shared values and sensemaking.

Are the technologists who created the machines that enable undue influence culpable in any way?

Few protesters would say they were coerced into action; most would say that they participated of their own free will. But taken in aggregate, the data science and psychology suggest they were statistically likely to do so based on their profiles and histories of screen use. In truth, they were propagandized, or “nudged,” by groups using persuasive technologies. As a result, they took actions that broke the law. Should they be held accountable if they were subject to undue influence, or should those exerting undue influence over them be held to account? Are the technologists who created the machines that enable undue influence culpable in any way?

The control of behavior, information, thought, and emotion has been the stated aim of those investing billions in the design of “persuasive technologies.” Many understand these businesses as some of the most “successful” in history. However, these companies—such as Facebook and Alphabet—are unique in history, different in kind from prior corporate giants in fossil fuel, agriculture, pharmaceuticals, or defense. Social media technology companies count billions of users, more than the population of any nation, and claim trillion-dollar valuations larger than the budgets of major governments. They have a monopoly-like control over AI-mediated personalized behavior modification systems—touching the lives of billions of people—which they lease out for profit to governments and businesses. The technical, scientific, and economic successes of these organizations have resulted in the widespread use of machines capable of exerting undue influence.

“Nudging” Can Be a Form of Undue Influence

To be clear, the argument here is that there are structural identities between normal screen use today and contexts that psychologists have demonstrated to involve undue influence. The claim is not that the situation is merely like undue influence, as if “undue influence” were only a metaphor for what social media does. Under certain conditions, these technologies do exert undue influence. We are holding mind control devices in our hands. There is a risk in using them, as noted above, of being subject both to limbic overload and algorithmic radicalization. This is the result of high levels of exposure to fundamentally coercive communication environments.

The power of these technologies should come as no surprise. Government and big tech have spent years researching the use of the psychological and data sciences to improve outcomes for citizens via behavior-change techniques. Mostly, these activities have run under the name of “nudging.”[37] For more than a decade, there has been major funding for public-private partnerships that advance new forms of social control and persuasion that bypass education and aim directly at behavioral change. That is the nudge: it isn’t propaganda hitting you over the head, but subtle changes in choice architecture and information flow that direct your actions and attention down desired avenues, ideally without awareness that nudging is happening.[38]
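
Here is a minimal sketch of choice architecture at work, under the standard status-quo-bias assumption that most people stick with whatever is pre-selected (the 90% figure below is an assumption standing in for that bias, not a measured value). The options never change; only the default does, and the aggregate outcome flips.

```python
import random

random.seed(1)

def enrollment_rate(default, options=("enrolled", "not_enrolled"),
                    n_people=10_000, p_stick_with_default=0.9):
    # Simulate a population facing the same two options under a given default.
    enrolled = 0
    for _ in range(n_people):
        if random.random() < p_stick_with_default:
            choice = default  # the nudge: inertia does the work
        else:
            choice = random.choice(options)
        enrolled += (choice == "enrolled")
    return enrolled / n_people

print(enrollment_rate(default="not_enrolled"))  # opt-in design:  ~0.05
print(enrollment_rate(default="enrolled"))      # opt-out design: ~0.95
```

This is the benign face of the technique; the same lever becomes coercive when the architecture is tuned, person by person, to produce a behavior rather than to inform a choice.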

These efforts were initially rolled out through Silicon Valley, Stanford, Harvard, and the Obama administration. They crossed the Atlantic, resulting in a UK “Nudge Unit,” which has branches offering consulting all over the world.[39] In 2020, during the pandemic, the Nudge Unit (now known as the Behavioural Insights Team) and related organizations emerged as key strategic assets in the fight against the virus. Communications and policies were designed and executed with explicit awareness of the behavior change that organizations hoped to elicit. Nudging—not education—was the explicitly stated goal of the communications teams working on public health campaigns. Nor was this a unique shift in strategy: nudging became the goal of much government PR almost a decade ago.[40]

Of course, some applications of nudging are not problematically coercive. These include the examples frequently presented as justifications of its use.[41] For instance, safety labels on cigarette packs encourage people not to smoke by accurately describing the dangers of smoking, and these have been shown to work. It has been argued that nudging should be free from profit-seeking and other incentives that ought not be mixed with powerful behavior-change technologies. Rather than bypassing learning, nudging can direct people towards education and thereby open doors to truly informed choice-making. Nudging can simplify choices while still presenting an accurate and comprehensive choice landscape. It need not be paternalistic and coercive; it does not have to limit options and information to the point that actual free choice is imperceptibly removed.

However, many nudges are ethically suspect because they involve techniques that exert undue influence. These would include techniques that involve covert behavior monitoring, microtargeted messages, and coercive choice architectures. In these scenarios individuals are presented with a forced choice—they are surrounded by information that leaves only one legitimate outcome. These kinds of nudges are the very stuff of digital campaign politics on social media, where users are surveilled and microtargeted with ads that address their specific emotions, such as their specific fears and aesthetic preferences. With only two candidates to choose from, there is no other logical choice but to pick the one that emotionally resonates.

Of course, emotion and education go hand in hand. Some messages will always come with intense emotion. The warnings on cigarette labels should make people fear lung cancer from smoking. Indeed, the best of what is possible with nudging—and what many in the field aim to do—is to create contexts that provide easy access to important information and the appropriate related emotions. However, nudging becomes coercive when the behavior change resulting from the emotional impact of information becomes more important than the accuracy of that information. This is emotional manipulation rather than education, with the goal of behavioral change, irrespective of damage to the truth.

Emotion is an important aspect of government and business communication strategies and nudging campaigns. However, fear in particular has been the focus of regulatory agencies seeking to ban advertising that mobilizes fear to sell products and political candidates. This is in part because of the demonstrated power of fear to create conditions of undue influence.[42] Regardless, fear remains a central feature of social media and the news, where it has been selling papers and capturing attention since the printing press first allowed mass-produced periodicals to circulate in large urban centers.

Using emotional manipulation instead of education to control behavior is a sure sign that undue influence is being exerted. Nudging should educate, not manipulate.

Using fear or any strong emotion to manipulate behavior strategically is a problem. This approach to behavior control distorts people’s appraisals of their own emotions and their ability to judge what is desirable and what is not. Using emotional manipulation instead of education to control behavior is a sure sign that undue influence is being exerted. Nudging should educate, not manipulate. Today, the design of choice architectures and algorithmic content curation makes it hard to look away. Strong emotion sells. It moves people. Nudging through the manipulation of emotion can appear justified in a time of perceived emergency. But do the ends justify the means?

[Image: a brain swarmed by dozens of notification buttons.]

Are There Humane Futures for Persuasive Technologies?

In the first paper of this series, we argued that humanity is in an escalating information war, involving information weapons of mass destruction, leading toward mutually assured destruction.[43] The second and third papers suggested that the same tools used to create digital propaganda could be refitted to allow for educational initiatives of unprecedented scale and effectiveness.[44] The third paper added evidence that traditional propaganda campaigns and public relations approaches are starting to backfire. Classic centralized propaganda is failing as digital technologies lower barriers to entry for participation in the information war. The tools of undue influence are in the hands of far too many players for traditional forms of propaganda to continue to work. Both our minds and our communities are being rendered incoherent as a result of a constant barrage of limbic hijacking, attention capture, and polarizing algorithmic curation. All three papers suggested that there is an urgent need for solutions in the domain of education, including innovations in digital civic technologies.

We see now in this final paper of the series that the sense of urgency is not being exaggerated for effect. Newly ubiquitous technologies have normalized coercive communication. They are making undue influence an aspect of everyday life for many people. The legal implications of being subject to undue influence are far reaching, as the situation threatens the sovereignty of individual choice and agency. When individual choice is compromised by the strategic and manipulative action of a third party, a cascade of ethical and political implications ensues. If in fact voters and consumers are not in a position to be held accountable for their choices, what do we make of claims about democracies, free markets, and open societies?

Coercive communication is not just part of the environment, as it was in the past. Coercive communication now constitutes the environment itself.

Historically, propaganda has often existed alongside a free-thinking population and robust educational institutions. A threshold has been crossed with the emergence of microtargeted computational propaganda delivered through social media. Social media contexts are fundamentally different from broadcast technologies because they function as a public space and therefore afford powerful psychological and social pressures. Coercive communication is not just part of the environment, as it was in the past. Coercive communication now constitutes the environment itself.

Without intervention, these technologies will continue to destroy our minds and communities. Their power to sway our psychology is already undermining the legitimacy of voting as an aspect of government. Their algorithms capture economic choice dynamics, directing consumer behavior, as social media companies stand in as a new “invisible hand” shaping the market. These technologies present a clear and present danger. What can be done?

The first step is to become aware of the dynamics of undue influence. This is needed before regulation, legal action, and design initiatives, because it is our own minds and communities that have been impacted. The information environment has put us in the position of not being able to trust our own minds, and often even less the judgments of our fellow citizens. There is no easy way for people to uncouple from undue influence dynamics and reestablish control over their own minds and behaviors. However, there are some steps that can be taken at various levels, which might help to slow the rising tide of undue influence. At the same time, we can begin to incentivize innovations in social media technologies that are more conducive to the functioning of open societies.

The next critical step involves an increase in public oversight and awareness of the methods, risks, and impacts of attention capture business models. In particular, it is important for citizens to understand the implications of microtargeted advertisements that are coupled to algorithmic curation. Open the psychometric and surveillance software to users and allow them to see and understand their own personal data. Then begin to find ways to ensure that social media technologies decouple the incentive to make profits from the ability to hijack cognition and behavior. Sane and humane digital environments can be created if the industrial incentives can be aligned with humanitarian ends. Psychological and data sciences can be used to promote learning, rather than to nudge the masses into behavioral conformity, or overwhelm their attention to the point of incapacitation.

There is a way forward for open societies in the digital age. It begins with understanding and then binding the power of social media technologies to exert undue influence. The same technologies that brainwash us now could provide for a kind of education more powerful than any modern school system. The tools of algorithmic curation used to capture our attention to deliver advertisements could be used to promote individualized learning and to protect our attention from being degraded. This is entirely possible. Schools, communities, governments, and markets can be reimagined based on the use of social media, but this requires rethinking both social media’s purpose and beneficiaries. There is a way forward for open societies in the digital age, but it is not our default path.

Footnotes

  1. Joel Dimsdale, Dark Persuasion: A History of Brainwashing from Pavlov to Social Media (Yale University Press, 2021).

  2. Jonathan Rowson, “Transforming Behaviour Change: Beyond Nudge and Neuromania,” The RSA (report), November 2011, https://www.thersa.org/reports/transforming-behaviour-change.

  3. Shoshana Zuboff, The Age of Surveillance Capitalism (New York: Public Affairs, 2019).

  4. S.A. Hassan and M.J. Shah, “The Anatomy of Undue Influence Used by Terrorist Cults and Traffickers to Induce Helplessness and Trauma, So Creating False Identities,” Ethics, Medicine and Public Health 8 (2019): 97-107, https://doi.org/10.1016/j.jemep.2019.03.002.

  5. See the Consilience Papers’ Propaganda Series: “It’s a Mad Information War,” “We Don’t Make Propaganda! They Do!” and “The End of Propaganda.”

  6. See “The Facebook Files,” as reported in the Wall Street Journal, October 2021.

  7. Anthony Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion (New York: Holt, 2001).

  8. Serge Chakotin, The Rape of the Masses: The Psychology of Totalitarian Political Propaganda (London: Routledge, 1940 [2017]).

  9. Wladimir Velminski, Homo Sovieticus: Brain Waves, Mind Control, and Telepathic Destiny (Cambridge: MIT Press, 2017).

  10. Jeffrey M. Zacks, Flicker: Your Brain on Movies (New York: Oxford University Press, 2015).

  11. Ibid.

  12. Pratkanis and Aronson, Age of Propaganda, see note 7 above.

  13. Hassan and Shah, "The Anatomy of Undue Influence," see note 4 above.

  14. Roger Silverstone, Television and Everyday Life (New York: Routledge, 1994).

  15. See the classic work on TV-era persuasion: Vance Packard, The Hidden Persuaders (1957).

  16. Nicholas Carr, Utopia is Creepy and Other Provocations (New York: W.W. Norton, 2016).

  17. Pavica Sheldon, James M. Honeycutt, and Philipp A. Rauschnabel, The Dark Side of Social Media: Psychological, Managerial, and Societal Perspectives (London: Academic Press, 2019).

  18. Yalda T. Uhls, Minas Michikyan, Jordan Morris, Debra Garcia, Gary W. Small, Eleni Zgourou, Patricia M. Greenfield, “Five Days at Outdoor Education Camp without Screens Improves Preteen Skills with Nonverbal Emotion Cues,” Computers in Human Behavior 39 (2014): 387-92, https://www.sciencedirect.com/science/article/pii/S0747563214003227.

  19. Shoshana Zuboff, The Age of Surveillance Capitalism, see note 3 above.

  20. See the work of the Center for Humane Technology, and the documentary, The Social Dilemma, directed by Jeff Orlowski (2020).

  21. Sheldon, Honeycutt, and Rauschnabel, The Dark Side of Social Media, see note 17 above.

  22. Julian Mark, “A TikTok trend inspired students to steal toilets. Now, school officials say they’re slapping teachers,” The Washington Post, October 6, 2021, https://www.washingtonpost.com/nation/2021/10/06/tiktok-slap-teacher-challenge/.

  23. Robert M. Bond, et al., “A 61-million-Person Experiment in Social Influence and Political Mobilization,” Nature 489, no. 7415 (2012): 295-98.

  24. Robert J. Lifton, Thought Reform and the Psychology of Totalism: A Study of “Brainwashing” in China (New York: W.W. Norton, 1961).

  25. Ibid.; this articulation is based on the one found in Steven Hassan, Freedom of Mind: Helping Loved Ones Leave Controlling People, Cults, and Beliefs (Newton, MA: Freedom of Mind Press, 2013).

  26. Steven Hassan, Combatting Cult Mind Control, 4th ed. (Newton, MA: Freedom of Mind Press, 2018).

  27. Ibid. The “BITE” model of authoritarian groups: exercising control of behavior, information, thought, and emotion.

  28. Robert J. Lifton, Thought Reform and the Psychology of Totalism, see note 24 above.

  29. Ibid., 429.

  30. Ibid. Here Lifton is actually referencing Lionel Trilling’s phrase, "the language of nonthought."

  31. Joel Dimsdale, Dark Persuasion: A History of Brainwashing from Pavlov to Social Media (Yale University Press, 2021).

  32. Margaret Singer, Cults in Our Midst (New York: Jossey-Bass, 1996).

  33. Ibid. This box is based on Steven Hassan, Freedom of Mind.

  34. “Horizontal propaganda” is defined in the Consilience Papers, “We Don’t Make Propaganda! They Do!” see note 5 above.

  35. Nicholas Carr, Utopia is Creepy, see note 16 above.

  36. The Consilience Papers, “The End of Propaganda,” see note 5 above.

  37. David Halpern, Inside the Nudge Unit (London: WH Allen, Penguin, 2015).

  38. Choice architecture concerns “the design of different ways in which choices can be presented to consumers.”

  39. “Nudge Unit,” Institute for Government, https://www.instituteforgovernment.org.uk/explainers/nudge-unit, updated March 11, 2021. See also the work of the Behavioural Insights Team, https://www.bi.team/.

  40. David Halpern, Dominic King, Ivo Vlaev, and Michael Hallsworth, “Mindspace: Influencing Behaviour through Public Policy,” March 2, 2010, https://www.instituteforgovernment.org.uk/publications/mindspace. Elsewhere we document the failures of the continued use of centralized “vertical” propaganda as a means of social control; see Consilience Papers, “The End of Propaganda.”

  41. Cass Sunstein, The Ethics of Influence: Government in the Age of Behavioral Science (New York: Cambridge University Press, 2016).

  42. “Harm and Offence,” UK Code of Non-broadcast Advertising and Direct and Promotional Marketing, https://www.asa.org.uk/type/non_broadcast/code_section/04.html. Accessed December 1, 2021.

  43. Consilience Papers, “It’s a Mad Information War,” see note 5 above.

  44. Consilience Papers, “We Don’t Make Propaganda! They Do!” and “The End of Propaganda,” see note 5 above.