When it comes to countering information operations online, the most frequent approach so far has been to target “coordinated inauthentic behavior.” This allows platforms to focus not on content, but on the deceptive manner in which bad actors pretend to be someone other than who they really are. Avoiding judgments about content also lets companies dismiss the common criticism that they act as a “Ministry of Truth” deciding which information is true or false.
This approach has clear weaknesses, though, as a recent study from the University of Washington highlights.
The authors write about “strategic information operations (e.g. disinformation, political propaganda, and other forms of online manipulation),” which they describe as “efforts by individuals and groups, including state and non-state actors, to manipulate public opinion and change how people perceive events in the world by intentionally altering the information environment.”
The study has three case studies showing various examples of such strategic information operations: operations by the Saint Petersburg troll factory (the Internet Research Agency, or IRA) targeting the US election campaign in 2015-16, the disinformation campaign discrediting the humanitarian White Helmets in Syria, and the online ecosystem supporting conspiracy theories about crisis events, such as terror attacks.
In those three case studies, the authors find differing levels of actual coordination among the individual actors involved in their respective operations. While IRA actions targeting political discourse in the United States were highly coordinated, “in other cases, the collaboration consists of convergent behaviors that reflect a kind of intentional shaping or cultivation of an online community.” Such operations seek unwitting agents and citizen marketers who will spread the desired messages on their own.
Experts are already familiar with this type of approach, which remains a hallmark of Kremlin disinformation tactics. In a longstanding strategy with numerous recent examples, the Kremlin has actively tried to find other actors who would spread its disinformation campaigns for it: either to completely hide the source of the original disinformation, or at least to strengthen the effect of the campaign by recruiting another voice that will amplify the spread and give it increased legitimacy.
In the past year alone, such cases have been documented in Ukraine and Germany, where the Russian government is actively pursuing this strategy and reaching out to external partners to gain new channels for spreading its disinformation. Similar cases have been documented in multiple other European countries, where the Kremlin's active role has not yet been proven, though the campaigns certainly benefited Moscow.
Even the other two case studies, which had a less prominent connection to Kremlin-serving disinformation actors, describe tactics that are still very much in line with the usual pro-Kremlin messaging. In recent years, the disinformation machine controlled by Moscow has spread aggressive disinformation both about the White Helmets, accusing them of various crimes, and about multiple terror attacks, spreading all kinds of conspiracy theories or persuading audiences that these events never happened.
Focusing solely on “coordinated inauthentic behavior” will not help us much against campaigns that have penetrated audiences more deeply. As Yuri Andropov, the former head of the KGB, used to say, the role of disinformers is to plant the seeds of disinformation and water them day after day until they bear fruit; later, someone else will do the legwork. “Eventually, American leftists would seize upon Ares and would start pursuing it of their own accord,” writes Ion Mihai Pacepa in his book Disinformation, describing the KGB disinformation operation aimed at weakening the US during the Vietnam War, code-named after Ares, the Greek god of war. “In the end, our original involvement would be forgotten and Ares would take on a life of its own.”
Once a disinformation campaign works in a controlled environment long enough, there is a high probability it will penetrate into an uncontrolled environment. The level of coordination of the campaign will decrease, but its impact can even increase for reasons described above.
While the level of coordination changes, the goal of the campaign remains the same. And it is this goal that deserves much more attention from social media platforms, as well as from the policymakers dealing with this issue.
This is also a proposal that the authors from the University of Washington cautiously suggest: “For example, a more robust approach might consider information operations at the level of a campaign and problematize content based on the strategic intent of that campaign.” The authors do note, however, that such an approach could potentially be at odds with freedom of speech.
We must find strategies to move away from the purely formalistic approach focusing on coordination and inauthenticity and shift to an approach that also reflects the content and intention of a disinformation campaign. Tactics are ever-shifting, and what may have been an easily identifiable “inauthentic” amplifier may now be far more difficult to spot. As disinformation grows more complex, so too should anti-disinformation.
Sticking only to criteria of coordination and authenticity is too simple a formula for anti-disinformation efforts, one that hands information aggressors a how-to guide for avoiding countermeasures.