How is digital misinformation spread?

Annual Review Article 2020

US President Donald Trump warns of climate “prophets of doom” in his speech at the 2020 World Economic Forum in Davos. Photo: EU, 2020. EC - Audiovisual Service

Misinformation is not just a security issue but influences debates on health, climate, energy and culture. The “Digital Disinformation” research project studies the reach and effects of misinformation, contributing to our understanding of a politically sensitive phenomenon.

By Rebecca Adler-Nissen, professor, PhD, Department of Political Science, University of Copenhagen

In the USA, a man died, and his wife became seriously ill, after taking the product chloroquine phosphate, which they had understood from President Trump's statements could protect them from becoming infected with coronavirus.

And in the United Kingdom and the Netherlands, 5G masts have been set on fire on the basis of a conspiracy theory that the Chinese city of Wuhan was struck by coronavirus because it had recently rolled out the 5G network.

Fig. 1 Google searches for “disinformation” 5 May 2019 – 5 May 2020.

As early as the end of January 2020, WHO warned that misinformation was one of the biggest global threats to managing COVID-19. But what is misinformation? How is misinformation spread, and when is it spread strategically, thereby becoming disinformation? What is the extent of the problem, and what are the consequences?

These are the key questions addressed in the “Digital Disinformation” research project. Since the 2016 US presidential election campaign, the deliberate spreading of fake news has been fiercely debated and is the subject of widespread scientific interest.

After COVID-19 emerged in late 2019, interest rose markedly, as illustrated in fig. 1.

How do we measure misinformation?

We know surprisingly little about digital misinformation, for several reasons. One challenge is that it is difficult to isolate exposure: misinformation is spread in mass media, on social media and by word of mouth.

Many researchers use surveys, i.e. self-reporting, but this is a relatively imprecise method. 

Another approach is to recruit a group of citizens to voluntarily have their internet usage measured, but – ethical and practical challenges aside – there are other methodological pitfalls: are people genuinely affected by the misinformation simply because they click on a website, or is it just noise? 

Types of misinformation

Misinformation refers to incorrect or inaccurate information, but says nothing about the intention behind or the motivation for producing and spreading it.
Disinformation differs from misinformation not just by being inaccurate, but by being deliberately inaccurate: it is produced specifically to mislead, deceive or confuse. Disinformation reflects a conscious desire to mislead, not an accident or an honest mistake.
Fake news and disinformation share the characteristic that both are intentionally inaccurate or misleading. The distinctive feature of fake news is that this type of disinformation pretends to be journalism.
Source: Rebecca Adler-Nissen, Frederik Hjorth and Yevgeniy Golovchenko: “Digital misinformation: How does it work?” (in Danish), Political Ideas and Analysis, Department of Political Science, University of Copenhagen.

Another challenge is how to define misinformation. Most researchers rely on fact-checking organisations, which categorise websites and news channels that spread misinformation.

The problem is that not everything on these websites is actually misinformation. Hence, there is a risk of overstating the volume of misinformation.
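In practice, this source-level approach often boils down to matching the domains of shared links against a list compiled by fact-checkers. A minimal sketch in Python of that labelling step, with entirely hypothetical domain names, might look like this:

```python
from urllib.parse import urlparse

# Hypothetical list of domains flagged by fact-checking organisations.
# A real study would substitute a published fact-checker list.
FLAGGED_DOMAINS = {"example-fakenews.com", "another-dubious-site.net"}

def is_from_flagged_source(url: str) -> bool:
    """Return True if the URL's domain is on the flagged-domain list."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):  # treat "www.x.com" and "x.com" alike
        domain = domain[4:]
    return domain in FLAGGED_DOMAINS

tweets = [
    {"id": 1, "url": "https://www.example-fakenews.com/story"},
    {"id": 2, "url": "https://reliable-news.org/report"},
]
flagged = [t for t in tweets if is_from_flagged_source(t["url"])]
print(f"{len(flagged)} of {len(tweets)} tweets link to flagged sources")
```

The sketch also makes the weakness concrete: every link from a flagged domain is counted as misinformation, accurate articles included.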

In the real world there is, of course, a grey zone of content that is neither outright misinformation nor factual information. In our research so far, we have therefore chosen a cautious strategy: our research takes its starting point in concrete stories where we know for a fact that misinformation is involved.

Our approach is more time-consuming, but it is also more precise. For example, one of our cases is the crash of flight MH17 over eastern Ukraine in 2014.

A Dutch-led international team of investigators concluded that Russian-backed separatists were behind the attack (and not e.g. Mossad, the CIA or the Ukrainian army, as claimed by some on social media). 

In terms of methodology, we draw on a number of different disciplines and methods, including political psychology, natural language processing, computer science and sociology. 

We use machine learning and natural language processing on social media data (e.g. tweets) to identify patterns. We also use social network analyses to map the spread of misinformation and identify ‘super spreaders’. Finally, we make use of experiments, interviews and participant observation.
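To give a sense of what the network analysis involves, the sketch below uses Python's networkx library and invented accounts to rank users by how many others retweet them, one simple proxy for identifying candidate ‘super spreaders’. The account names and the measure are illustrative, not the project's actual data or method:

```python
import networkx as nx

# Hypothetical retweet edges: (retweeter, original_poster). In a real
# study these would be extracted from millions of collected tweets.
retweets = [
    ("user_a", "spreader_1"), ("user_b", "spreader_1"),
    ("user_c", "spreader_1"), ("user_b", "spreader_2"),
    ("user_d", "user_a"),
]

# Directed graph: an edge points from the retweeter to the account
# whose content was retweeted.
G = nx.DiGraph()
G.add_edges_from(retweets)

# Accounts retweeted by the most distinct users are candidate
# 'super spreaders'; here in-degree counts those retweeters.
ranked = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
for account, n_retweeters in ranked[:3]:
    print(account, n_retweeters)
```

A real study would use far larger retweet networks and more robust centrality measures, but the basic logic is the same.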

Misinformation affects specific sections of the population

Our research shows that not all citizens are equally exposed to misinformation. For example, we studied the group of people who were exposed to pro-Kremlin misinformation on Twitter in connection with the MH17 crash over Ukraine. 

The underlying material is based on more than 8,000 Twitter accounts that all tweeted about the flight crash in 2014, and data on their 12.5 million-plus followers. 

Fig. 2 The probability of following a Twitter account dispersing pro-Russian misinformation (generally and about the plane crash in 2014).

The study focuses on US citizens’ exposure to misinformation, as we have access to particularly detailed data on US Twitter users. There is a clear ideological asymmetry in who was exposed to pro-Kremlin misinformation in connection with the flight crash (fig. 2). 

The users furthest to the right belong to a small, ideologically extreme group positioned to the right of, for example, President Trump. One possible explanation is that mistrust of mainstream media, which is higher on the US right wing, translates into the use of alternative news sources where misinformation flourishes. However, it is not only political opinion that shapes the spread and consumption of misinformation.
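Results like those in fig. 2 typically come from modelling the probability of following a misinformation-spreading account as a function of an estimated ideology score. The following sketch illustrates the idea with simulated data, not the project's data or model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated data: ideology scores (negative = left, positive = right)
# and a binary indicator of following a misinformation account.
ideology = rng.normal(0.0, 1.0, size=1000)
# Build in the asymmetry described in the text: following becomes
# more likely further to the right (coefficients are invented).
p_follow = 1 / (1 + np.exp(-(1.5 * ideology - 2.0)))
follows = rng.binomial(1, p_follow)

model = LogisticRegression().fit(ideology.reshape(-1, 1), follows)

# Predicted probability of following at a few points on the scale.
grid = np.array([[-2.0], [0.0], [2.0]])
for x, p in zip(grid.ravel(), model.predict_proba(grid)[:, 1]):
    print(f"ideology {x:+.1f}: P(follows) = {p:.2f}")
```

The simulated coefficients are chosen only to reproduce the qualitative pattern described above: the probability of following rises towards the right of the scale.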

Several studies show that elderly citizens are more frequently exposed to misinformation and themselves contribute (possibly unwittingly) to its spread. We see the same pattern in our own study of pro-Kremlin disinformation.

Exposure to misinformation on the MH17 crash (as a proportion of all relevant information) rises from approx. 3 pct. to approx. 23 pct. as we move from the youngest to the oldest subjects in our sample. We hope to be able to investigate whether age is also a significant factor in Denmark.
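A calculation of that kind is essentially a group-wise proportion. The sketch below, with invented per-user counts, shows how such exposure shares per age group could be computed in Python with pandas:

```python
import pandas as pd

# Invented per-user counts of MH17-related tweets seen, split into
# misinformation vs. all relevant information.
users = pd.DataFrame({
    "age_group": ["18-29", "18-29", "30-49", "30-49", "65+", "65+"],
    "misinfo_seen": [1, 1, 3, 2, 9, 12],
    "total_seen": [40, 35, 50, 45, 48, 44],
})

# Exposure share per age group: misinformation as a proportion of all
# relevant information, the measure the article reports in pct.
shares = (
    users.groupby("age_group")[["misinfo_seen", "total_seen"]].sum()
    .assign(share_pct=lambda d: 100 * d.misinfo_seen / d.total_seen)
)
print(shares["share_pct"].round(1))
```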

What is the situation in Denmark?

We have conducted an experiment in Denmark to investigate whether it is possible to correct people's perceptions after they have been misinformed, or whether an echo of the misinformation lingers. Our investigation found no signs of such “echoes”, an interesting result that goes against most existing international research on misinformation.

We have also studied Danish people’s use of YouTube during the corona crisis. The video-sharing platform is used daily by 27 pct. of Danes, making it the country’s second most popular social media platform, topped only by Facebook. 

We found few misleading videos about COVID-19. The most popular YouTube videos in Denmark about coronavirus were either informative videos or films making fun of people who do not understand or follow government instructions about handwashing and social distancing. 

Of course, old wives’ tales and conspiracy theories flourish in Denmark as they have since the dawn of time, and to a great extent on social media. However, we still do not know precisely what characterises the groups that engage in fake stories, or the significance of factors such as age, gender, political opinion, etc. 

The relatively low degree of polarisation in Denmark, coupled with a high level of education and public service media, most likely plays a role in determining the extent to which digital misinformation is disseminated among the Danish public.

Censorship by tech giants and deepfakes

How is misinformation handled? In Viktor Orbán's Hungary, you can now be imprisoned for up to five years for sharing misinformation during the coronavirus pandemic.

And, in 2018, France passed a controversial law giving French judges the right to order immediate removal of fake news during election campaigns. Today, the most important measures against misinformation are taken by big tech. 

All the larger tech platforms now have systems that automatically – often based on user complaints – remove content that incites violence, for example. 

With the coronavirus, the tech giants have gone further, collectively deciding to promote authoritative sources in searches and recommendations during the crisis. This means users are directed to the Danish Health Authority when using YouTube, Twitter, Facebook, TikTok or Google in Denmark.

Our next step is to investigate the drop in the volume of misinformation after the tech giants actively started removing misinformation on COVID-19.
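One simple way to quantify such a drop is to compare average daily volumes of flagged content before and after the platforms' intervention date. The sketch below uses invented numbers purely to illustrate the comparison; a credible analysis would require a proper interrupted time-series design with controls:

```python
import pandas as pd

# Invented daily counts of flagged COVID-19 posts around the date the
# platforms began actively removing misinformation.
daily = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=10, freq="D"),
    "flagged_posts": [120, 130, 125, 140, 135, 60, 55, 50, 48, 45],
})
intervention = pd.Timestamp("2020-03-06")

before = daily.loc[daily.date < intervention, "flagged_posts"].mean()
after = daily.loc[daily.date >= intervention, "flagged_posts"].mean()
print(f"mean before: {before:.0f}, after: {after:.0f}, "
      f"drop: {100 * (before - after) / before:.0f} pct.")
```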

In addition, we will focus on a new generation of misinformation: deepfakes, fake videos of such high quality that the naked eye cannot distinguish real from fake. We hope this will contribute to a deeper understanding of how misinformation develops and is spread, and how we can best handle it.