How to Fight Disinformation: Introduction and Overview

This is a series about how communities can fight back and protect themselves against weaponized disinformation.

Part I: Firehosing
Part II: Gaslighting
Part III: Distraction
Part IV: Signaling and Dog Whistling
Part V: Resilience Targeting

In 2015 and 2016, new and paranoid strains of political ideology took social media discourse by storm. The details varied by community, but all versions were toxic brews of fear, terror, and lies, carried on a torrent of algorithmic amplification with the goal of lowering public morale and straining or breaking social bonds.

The algorithmically charged disinformation, and the upsetting discussions around it, were so overwhelming that communities and countries at first had difficulty recognizing that they were under attack. In fact, misinformation and disinformation spread at unprecedented speed and scale, generating ripple effects in politics and policy; these were acts of hybrid warfare, which tends to focus on information rather than traditional “kinetic,” or physical, battle spaces.

This is a tactic heavily used by Russia’s military, and it is therefore often analyzed as though Russia alone employs it. But while the Kremlin has well-known expertise in such techniques, they are by no means limited to any one country or entity. Social media has made them extremely simple and relatively cheap to scale, turning the online world into a state-sponsored and corporate-backed free-for-all:

Hybrid action is characterized by ambiguity as hybrid actors blur the usual borders of international politics and operate in the interfaces between external and internal, legal and illegal, and peace and war. The ambiguity is created by combining conventional and unconventional means – disinformation and interference in political debate or elections, critical infrastructure disturbances or attacks, cyber operations, different forms of criminal activities and, finally, an asymmetric use of military means and warfare.

By using the aforementioned unconventional and conventional means in concert, hybrid actors veil their action in vagueness and ambiguity, complicating attribution and response. The use of different intermediaries – or proxy actors – supports the achievement of these goals. Hybrid action is cost-effective as it turns the vulnerabilities of the target into a direct strength for the hybrid actor. This makes hybrid action more difficult to prevent or respond to.

In an online universe defined by targeted disinformation, we become the unwitting (and sometimes witting) instruments of our own offline destruction, from our individual, compartmentalized lives all the way up to international communities. Now, more than at any other time in human history, we can see how interconnected we all are: a single tweet may mean the difference between life and death, and stories circulated to just the right people can trigger grotesque acts of ultraviolence.

This is an issue that, for the moment, is intractable. Our behavioral and personality profiles won’t be going anywhere; our identities have already been gathered, cultured, sliced, diced, manipulated, reassembled into simulacra of ourselves for “modeling” purposes, and sold to the highest bidders. None of it is coming back.

This means that if you use social media, and sometimes even if you don’t, there are entities out there holding imprints of your personality, which, by their own admission, they are trying to use to predict and manipulate your behavior, in ways that were previously invisible but are becoming more visible by the moment. You have been checked, tested, and scored according to decisions you make every day, and those scores are used to try to “nudge” you into specific choices using techniques such as “dark patterns,” or visual and emotional manipulation that is not immediately obvious as such:

Dark patterns show up all over the web, nudging people to subscribe to newsletters, add items to their carts, or sign up for services. But, says Colin Gray, a human-computer interaction researcher at Purdue University, they’re particularly insidious “when you’re deciding what privacy rights to give away, what data you’re willing to part with.” Gray has been studying dark patterns since 2015. He and his research team have identified five basic types: nagging, obstruction, sneaking, interface interference, and forced action. All of those show up in privacy controls. He and other researchers in the field have noticed the cognitive dissonance between Silicon Valley’s grand overtures toward privacy and the tools to modulate these choices, which remain filled with confusing language, manipulative design, and other features designed to leech more data.

Those privacy shell games aren’t limited to social media. They’ve become endemic to the web at large, especially in the wake of Europe’s General Data Protection Regulation. Since GDPR went into effect in 2018, websites have been required to ask people for consent to collect certain types of data. But some consent banners simply ask you to accept the privacy policies—with no option to say no. “Some research has suggested that upwards of 70 percent of consent banners in the EU have some kind of dark pattern embedded in them,” says Gray. “That’s problematic when you’re giving away substantial rights.”
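To make those categories concrete, here is a minimal, purely illustrative sketch in TypeScript. Everything in it is invented for illustration (it is not drawn from any real site’s code); it shows how three of Gray’s types — sneaking, interface interference, and obstruction — can be wired into a consent banner:

```typescript
// Illustrative sketch only: all names are invented, and this is not any
// real site's code. It demonstrates three of Gray's dark-pattern types.

type TrackingPrefs = { analytics: boolean; ads: boolean; profiling: boolean };

// "Sneaking": every data-collection toggle defaults to ON, so a user who
// does nothing has silently granted maximal consent.
function defaultPrefs(): TrackingPrefs {
  return { analytics: true, ads: true, profiling: true };
}

// "Interface interference" plus "obstruction": accepting everything is one
// large, brightly styled button, while refusing requires spotting a small,
// low-contrast link and navigating an extra settings screen.
function renderBanner(): string {
  return `
    <div class="consent-banner">
      <p>We value your privacy.</p>
      <button class="btn-large btn-bright">Accept all</button>
      <a class="tiny low-contrast" href="/settings/privacy">More options</a>
    </div>`;
}

console.log(renderBanner(), defaultPrefs());
```

Nothing here is technically sophisticated; the manipulation lives entirely in defaults and visual emphasis, which is exactly what makes it so hard to notice, let alone regulate.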

This would be a matter of grave concern in any case, but because this technology has been developed opaquely and used against us without our knowledge or consent to bring about specific political goals, it threatens to usher in an age in which would-be authoritarian strongmen with the money and desire to subjugate can distract and upset entire populations long enough to seize power.

A few caveats are in order here. For one thing, this machinery is a very blunt instrument. Social media, which has sold itself to corporations as a persuasion machine, does use sophisticated marketing techniques to sway personal decisions. But the tech world’s understanding of humanity is demonstrably so oversimplified that, instead of a persuasion machine, it has built a permission mechanism: one that allows the worst among us to spread lies, corrosive rhetoric, and suffering, artificially elevating their voices above the rest through algorithms and misdirection.
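As a rough illustration of that permission mechanism, consider a toy feed-ranking sketch (again in TypeScript; this is a simplified assumption about how engagement-driven ranking behaves, not any platform’s actual algorithm). When a feed is scored purely on reactions, an enraging lie and a careful correction are indistinguishable to the scorer, and the lie usually generates more reactions:

```typescript
// Toy model of engagement-only feed ranking. An assumption-laden sketch,
// not any real platform's algorithm: the score counts reactions and has
// no notion of accuracy or harm.

interface Post {
  text: string;
  likes: number;
  replies: number;
  shares: number;
}

// Replies and shares are weighted higher because they spread content
// further; truth and falsehood earn points at exactly the same rate.
function engagementScore(p: Post): number {
  return p.likes + 2 * p.replies + 3 * p.shares;
}

function rankFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => engagementScore(b) - engagementScore(a));
}

const feed = rankFeed([
  { text: "Local library extends weekend hours", likes: 40, replies: 2, shares: 1 },
  { text: "THEY are coming for your children!!", likes: 35, replies: 60, shares: 80 },
]);

console.log(feed[0].text); // the inflammatory post tops the feed
```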

This series is also not intended to take the place of professional analysis, nor of help for individual psychological trauma. Rather, it is an attempt to distill the at-scale psychological effects of disinformation and propaganda into their corollaries at the individual level, for a better understanding of how they work and how to deal with them. And while we are neither qualified nor equipped to offer advice on psychological help, we can identify and compare these mechanisms.

And we can reverse-engineer these tactics. Cultivating radical compassion is one way to counteract the confusion and frustration caused by the emotional and psychological attacks that are part and parcel of hybrid threats. Reach out to your neighbors, establish mutual aid networks, and be kind to one another, but do not tolerate intolerance in your networks, and do not be afraid to take the time to establish boundaries.

These strategies are part of what has come to be called building resilience, or forming a cultural immune system against disinformation toxicity. Building up institutions — particularly journalism — is essential, but the fight begins at the individual level, which effectively democratizes the response to warfare and threats. In other words, everyone who wants to can fight back:

Building societal resilience is the only assured way of keeping at least some of the home-field advantage because the aggressor will try to build-up and utilise the effect of surprise. This, however, is not an easy task. It requires a long term plan and dedication to implementation.

First, a strong political mandate and security concept need to be in place. Second, planning, awareness building, and education are needed. Third, the key stakeholders in various parts of the society must share a common situational awareness, threat and risk assessment, and planning and training processes.

Building a more resilient society should not be viewed only as an extra burden for already economically struggling Western societies; it is also a great opportunity. The structures that allow a society to respond in an agile manner to hybrid threats also support better understanding and coping with the complex underlying interrelations that make our modern societies fragile. These defensive structures also help to make our societies more functional, as decision-making processes become more transparent and inclusive.

In this series, we will discuss the different forms of resilience and offer suggestions and solutions for dealing with information warfare and hybrid threats in your everyday life.

*