It is becoming harder to ignore the fact that many Western experts and a certain interested section of the public are spending disproportionate amounts of time speculating about the personal and social vulnerabilities associated with the spread of so-called computational propaganda—technogenic disinformation produced on an industrial scale and distributed openly by bots or other semi-automatic methods.
This global shift in attention may be partly explained by the constant rise in the number of social network users and the increasing time we spend on these platforms. The networks' complex algorithms, which work largely by exploiting human neuropsychological processes and social psychology, capture the daily and hourly attention of an ever-increasing number of people. It is also worth mentioning the opacity of these algorithmic information flows, which can be controlled for mercenary purposes. Let us also recall that, not without the help of the internet giants, attention and trust have been turned from a public currency into real capital for social networks.
Of course, having such a large-scale, convenient and relatively inexpensive tool to hand, an ill-wisher will certainly use it both to manipulate communications and for effective and targeted dissemination of hostile propaganda, the goals of which can range from creating information supersaturation (and, as a result, chaos) to exacerbating polarisation and provoking clashes between certain social groups. In any case, we should not forget that the end-user of such influence operations is still the information consumer: that is, a person whose actions or inaction may, for example, determine the results of elections or the general level of trust in public discussions, key opinion leaders or government agencies.
How and Why Do We Exacerbate the Problem?
By devoting so much attention (and thereby both material and intellectual resources) exclusively to the analysis of computational propaganda, we unwittingly forget the core element in the cycle of producing, distributing and consuming information: the human being and his cognitive susceptibility. Swept up in the global trend of identifying intentional disinformation and falsehoods on social networks, or simply the products of trivial unprofessional journalism, we have become obsessed with hunting for networks of bots, especially, of course, where the hunting is relatively easy (for example, among openly accessible Twitter accounts). But over-reliance on the superficial, technological aspect of misinformation lays a strong foundation for our cognitive blindness. Investigating the technological genesis of this problem, analysing it in depth and fully exposing malicious propaganda campaigns on social networks is, of course, a very important and necessary task, but it should not be done by switching attention away from the object of influence to the channels of communication. Why do we forget the tried and tested rule of planning and waging so-called information war, that one should not exclude the other? Is it because, when we face significant problems, we too often limit ourselves to describing them superficially (the realm of mediocre "experts", unscrupulous politicians and socially irresponsible journalists), or because we retreat into academic theorising, which offers no realistic solutions but only adds to the pile of scientific waste paper?
This approach leads many people to become confused about the meaning of the problem and to start substituting one concept for another. From this follows a breakdown in communication, which creates confusion. Confusion breeds mistakes, and our communicational self-absorption, based on a false consensus, does not allow us to see and admit those mistakes. Thus is born distrust of particular people, professions and social groups, and of their expectations and actions. All this contributes to polarisation in society and provokes even greater fragmentation. The things described above are merely symptoms, not the causative agents, of an informational disease. But, as in our daily lives, we are inclined to prescribe ourselves only symptomatic treatment instead of listening to specialists and strictly following the prescription for the long-term eradication of the root of the problem.
Abuse of Our Trust
Since social networks tend to exploit our impulsiveness and do nothing to help us overcome it, we need to return to a key link in the cycle of producing, distributing and consuming information. Meanings (hostile or not) are produced by humans for humans. Disguising them in attractive stories (narratives) and spreading them through various information channels likewise mostly happens with the direct or indirect involvement of humans. Many studies show that, alongside television and the internet, the inner circle of friends and family remains for most people an important source of information, both of a personal nature and on socio-economic and political topics, and such channels assume even greater importance during a crisis. This is borne out by the experience of Estonia in 2007 and Ukraine in 2014–15, when certain groups of people easily succumbed to manipulated information spread mainly through personal contacts or word of mouth. And if journalism, the so-called traditional media and even some online portals and social networks are bound by certain rules and responsibilities, as well as protection mechanisms (freedom of speech, non-disclosure of sources, etc.), how do you protect the information channel you trust the most, your personal contacts, from hostile influence?
While the majority has switched to detecting fake news in the virtual world, malicious rumour, gossip and slander continue to spread virally through human channels. Some such rumours are aimed, for example, at discrediting a public figure, creating local panic about the devaluation of a currency, disrupting movement in a particular region or confusing the residents of border areas. Others have broader aims, such as reducing the prestige of conscripted military service, undermining confidence in the police, reviving old meanings and symbols, refreshing linguistic or national stereotypes, creating new myths, kindling nostalgic sentiment, demanding peace at any cost, exaggerating corruption, denigrating the European Union and so on. Remember: one does not exclude the other. Various marginal "human rights activists", pseudo-media, left-wing advocates of Russian-language education, right-wing "Soldiers of Odin" and other "anti-NATO pacifists" often join this chorus. Every country and society has its own unique combination of messages and a wide variety of channels for delivering them. Any communication within a society, group or organisation is a potential target for informational aggression. The planners of hostile influence know very well that the more varied the thematic range and the more colourful the palette of versions, the wider the coverage of the various target groups in a society, where there will always be someone dissatisfied with something, whether it be stray dogs, high fuel prices, illegal immigrants, deforestation, NATO soldiers, a new railway or sanctions against Russia.
Attacking Us and Our Ideas
While many experts have turned to exposing bots on social networks (though less often the puppeteers pulling the strings), the real agents of information influence continue to ply their craft in the shadow of our communications illiteracy and cognitive blindness. We are not talking here about employees of unfriendly special services, but about people who disseminate deliberately harmful information, both personally and through their networks of contacts, among whom there are both willing distributors of ideas and paid agents. To these we can add the "useful idiots" and the declining standards of modern journalism, some of whose representatives can afford to be openly biased and to practise psychological terror through labelling and unsubstantiated accusations, while bearing no discernible responsibility for the harassment. The provocation of such supposedly spontaneous informational lynching is also part of a successfully planned and implemented hostile aggression against ideas, meanings, trends and values that are important to society. Many of us can instantly think of examples of harassment by the press. Yet potential threats also lie beyond the media space, since an active person is always both a generator of compelling ideas and a hotbed of useful memes; in other words, the main thrust of the aggression is directed not at the reporting channel but at the individual, in order to discredit him, reduce his influence and restrict his capabilities. For those who commission and design such information tools, it is important to remain unnoticed for as long as possible, using forces and means that will not be detected straightaway.
Information transmitted via personal contacts works in both the real world and the virtual one, including on social networks, where some of it nevertheless remains hidden from public discussion. Rumour, speculation, gossip and slander are transmitted most effectively either in person (even between strangers on public transport, in a shop, at a rally or in church) or anonymously, in online groups and via messaging apps. All of this follows the rule that either an individual can influence a group's opinion (through political leadership, appeals to national ideals, religious trends, expert opinions, etc.) or the group can influence an individual's opinion (through conformity, suggestibility or subordination). Understanding this, one must recognise that attempts to influence public opinion (let alone behaviour) through, for example, Twitter have a limited effect both geographically and cognitively. Toxic messages are spread by people through different channels not only to create informational chaos but with a much more important goal: to sow serious doubts about the effectiveness of the authorities, the transparency of social and political processes and the accuracy of official information, and to undermine trust between groups in society.
What is Alien to Us, and What is Ours?
Do not treat disinformation as merely a tool for tactical use. In fact, it is used just as actively to shape our collective memory: a shared perception of the past on which, as we know, our vision of the future may depend. It is for this reason that pro-Kremlin information campaigns revive, refresh and highlight myths about the USSR on a large scale, drawing on Soviet-style linguistic aesthetics and the traditions of school days spent in the Soviet republics, which makes it possible to draw the target audience (aged 35 and over) into agenda-setting communication. The transmission and consolidation of such messages through information flows is most effective at the personal level where, through the work of propaganda interpreters and agents who distribute ideas, "alien" elements appear in our everyday lives. These comprise:
- an alien agenda (for example, paranoid xenophobia like “All immigrants are criminals” or infantile paternalism such as “The government should create jobs, build factories and repair apartment blocks”)
- alien heroes (like Masha and the Bear and Russian warrior-liberators)
- alien contagious memes (“polite people”, “Don’t make my Iskander ballistic missiles laugh” and “We can do it again”)
- twisting foreign values (“The EU is evil and is destroying our sovereignty”, “Estonia must leave NATO” and “We have been friends with Russia for centuries, which is nearby, while America is far away”)
- alien events (23 February as Men’s Day, 9 May as Victory Day, torchlight processions with far-right slogans) and
- foreign traditions (New Year’s Eve meal—back to the USSR; wedding photos with a Soviet T-34 tank in the background).
The list of examples goes on, but all are created to be somehow introduced into the cognitive space, where they contribute to the substitution of concepts at a very simple value level: friend or foe, good or bad, close or distant.
Thus, it turns out that, for many local Russian-speakers in Estonia or Latvia, NATO soldiers in their country are invaders, but Russian troops in Crimea are liberators; the Estonian government (or the Latvian, or the Ukrainian) is just a puppet of the United States, while the local authorities in Abkhazia (or Transnistria) are fully fledged subjects of international law and equal partners of the Russian Federation. Similarly, the malicious European Union forces people into same-sex marriages throughout the Baltic states, while spiritual Russia cares about preserving the sanctity of family values; and the Russian Orthodox Church is the most faithful and orthodox in the world, while the one in Ukraine is schismatic.
An alien narrative will always look even more attractive if there is no clear position or unifying vision of a shared future at home, if society is split, or if your own stories no longer inspire you or fuel your cognitive energy. Manipulative disinformation that promotes falsehoods falls on fertile soil, which is why people readily believe rumours about everyday Nazism in the Estonian army (a typical confirmation being "A friend's son served in the army and saw it for himself"); the "irreparable environmental damage" caused by a new railway ("I read about it on the internet"); black refugees arriving in the country ("They were talking about it on TV yesterday"); "American soldiers getting drunk every weekend, fighting with our boys and raping our girls" ("A friend told me, and she works in a hospital"); or the theft of NATO grants through corrupt schemes ("They have always been thieves; they were even caught stealing as schoolchildren"). Such cock-and-bull stories are usually accompanied by subjective negative judgements about the main individuals, experts and opinion leaders representing and defending the opposite position or point of view: personalised attacks that label someone, undermine trust and damage their reputation. Their spread can be amplified both by the statements of agents of influence and by the sort of informational lynching mentioned above. It is always easier and more effective to launch accusations against an individual than to issue successful rebuttals, since even the most truthful refutation or debunking of fake news often fails to reach the audience that heard the initial allegation. Thus another rule of information warfare is observed: actions in the physical space are always preceded by actions in the information space, which justify the physical actions.
What Do We Do Next?
In order to understand the multifaceted nature of cognitive resilience and the long-term effect of hostile influence operations, it is advisable to have a public debate, based on which we must come to a clear position on the following issues.
- Our communications clean-up must be based on the understanding that no form of nostalgia makes sense if it destroys the future. The only real way to deflect historical propaganda and today's disinformation is active prevention, based on building a strong framework of our own. Other methods, such as debunking or rebuttal, do not work: they draw us into an information arms race in which we are constantly playing catch-up, become exhausted, usually lose and inevitably end up demotivated. As the saying goes, never argue with an idiot; he will only bring you down to his level and beat you with experience. The matrices of deliberate disinformation cannot be overcome with lessons in media literacy or the simple exposure of falsehoods, which probably achieves the opposite effect, strengthening belief in the lies and delusions. The key components of our cognitive stamina should be information systems with reliable veracity filters; personal and organisational flexibility that provides a broader set of creative solutions; and participation in voluntary network structures, which consistently outperform hierarchical ones.
- In the Russian-speaking space, neither the conventional nor the information war has ended; given the Kremlin's adventurist geopolitical ambitions and its wide range of influence tools, the war in one form or another may spread to other sectors, other languages and other territories. We must realise that we live alongside an embittered and impoverished people who still allow Russia to maintain and justify a state regime that constantly broadcasts and replicates hatred and anger on an industrial scale, both at home and abroad. For such a neighbourhood not to become tragically fateful again, we need to determine what kind of intervention in our national and personal cognitive space we consider acceptable, what kind we can just about tolerate, and what kind must be actively resisted. Moreover, if in future the aggressive tone of the Kremlin's actions for some reason softens and dialogue (which, incidentally, is already being promoted under the banner of business as usual) replaces the current confrontation, what should we then think about collective guilt? Should only the Kremlin regime and its minions be held responsible, or also the ideologically instinctive adepts of the Russian world (Russki Mir), or all Russians?
- The breadth of the Kremlin's aggressive and offensive vulgarity in various spheres of life in other countries rests not only on carefully planned virtual information operations, but also on a network of assorted agents of influence on the ground, as well as on improvised local information channels that mimic the media, thereby abusing the rights and freedoms of a democratic society. Exposing such agents of influence, their interrelations, sources of material support and ideological background will help to reduce the effectiveness of their malicious activities. This is a very difficult area, as agents of influence can easily lurk among journalists, government experts and officials, independent professionals, political figures, leaders of public opinion, bloggers, scientists and teachers, sports coaches, advisers to various foundations and organisations, businessmen and entrepreneurs, and people in the creative industries (actors, singers, painters and so on), as well as religious figures. In a democratic society, of course, we cannot allow McCarthyism to be unleashed, and such issues should therefore be approached with caution. To identify agents of information influence, there must be a legal basis and a solid body of irrefutable evidence of their malicious actions, ideological connections and possible (material) gain.
By militarising many spheres of life both domestically and externally, Russia is probably preparing for war, seeking hybrid ambiguity in the perception of many real threats and successfully hiding them beneath peaceful social processes. Looking at the Kremlin's ideological interests and geopolitical appetites, which spheres can we exclude from its orbit of influence with absolute certainty? How often do we hear from certain people "peace-loving" exhortations and slogans like "Don't demonise Russia" and repeated mantras such as "Keep politics out of sport", "Culture is not political", "Do business regardless of politics", "The Russian Church is not involved in politics", "You can't make everything simple: remember that both sides are to blame", "Don't surround peaceful Russia with NATO bases", "Don't escalate or provoke war", and so on? The widespread presence of such messages in the information space simply shows that agents of influence have been working successfully for years. For the sake of our cognitive endurance, we must avoid any compromise with such deceitful opportunism and start calling things by their proper names. The "duck test" may help: if something looks like a duck, swims like a duck and quacks like a duck, then it probably is a duck.
This article was originally published by ICDS.