
Fighting fake news on social media: What can be done?

Social media have built up an enormous reach: in 2021, the number of daily users was estimated at 211 million for Twitter, 122 million for YouTube and 1.9 billion for Facebook. At the same time, social media evoke deep concerns about the emergence and spread of fake news. For instance, recent evidence shows that some 50% of users who see fake news on social media say that they believe it.

What can be done? Policy makers combat fake news on social media by promoting fact checkers on the one hand and media literacy initiatives on the other. As one example, the European Union has recently provided significant resources to set up an official content verification platform and also endorses strategies to increase digital media literacy, including the launch of a European Media Literacy Week and the establishment of the Media Literacy Expert Group. Whilst such activities are certainly well-intentioned, their effectiveness is far from clear: empirical evidence on fact checking is inconclusive, and media literacy as a tool to fight fake news has hardly been studied at all.

Fact checking on social media induces three different effects

Two of my recent research projects address this gap in the literature. The first, “Fighting fake news: fact checking and trust in online information sources”, which is currently supported by a bidt Seeding Grant (“Anschubfinanzierung”), studies fact checking and its potentially undesired side-effects more thoroughly. In particular, we show that the impact of fact checking goes beyond the specific messages that are being targeted. Rather, fact checking on social media induces three different effects: (i) trust in the message tagged as false decreases (direct effect); (ii) trust in messages not tagged as false increases (implied truth effect); and (iii) trust in the online information source as such decreases (implied reliability effect).

The direct effect is clear: diminishing trust in a message tagged as false is just what we would expect. The implied truth and implied reliability effects require closer inspection, though. The implied truth effect accrues because the presence of fact checking creates ambiguity about untagged messages: Is a message untagged because it has been checked and found to be correct? Or is it untagged because it has not been checked (yet)? Thus, in comparison to a setting without fact checking, untagged messages appear on average more trustworthy. The implied reliability effect emerges because a message tagged as false signals that the source is unreliable. Diminished trust in the information source, in turn, entails diminished trust in all its messages, tagged or untagged. Hence, the implied reliability effect constitutes an indirect (or spill-over) effect on the trustworthiness of messages on social media that adds to the impact of the direct or implied truth effect, respectively. Surprisingly, the implied reliability effect’s occurrence has never been studied before.

Accordingly, the aim of our project is to document the existence of the direct, implied truth and implied reliability effects and, once their existence is confirmed, to explore how strong they are. We will accomplish this goal in two steps: Firstly, we will develop a theoretical model that captures the ideas outlined above. Secondly, we will use a large-scale online experiment to test the model’s predictions. As an update on the project’s progress, the theoretical model and the design of the experiment have already been completed, and we plan to run the experiment later this spring.

Media literacy could be an alternative remedy against fake news

Are there any means of combatting fake news on social media beyond fact checking? We address this question in a related research project, where we conduct an online experiment comparing the short- and longer-term effects of fact checking with the impact of a brief media literacy intervention in the form of ten tips to spot fake news (e.g., “Be sceptical of headlines.”). These tips already exist on Facebook, but they are not automatically displayed to users and are extremely hard to find. Our experiment shows that the effect of the fact checking intervention is limited to the specific fake news being targeted. By contrast, the ten tips help users to distinguish between false and correct information more generally, both immediately and even two weeks after the tips were displayed. The tips also raise participants’ general awareness of fake news, whereas fact checking fails to do so. Our results thereby promote brief media literacy interventions, whether provided by governments or by social media platforms themselves, as an effective alternative tool in the fight against fake news.

Summing up, the spread of fake news on social media is a huge concern that is likely to persist. Yet recent empirical evidence shows that effective tools to fight fake news already exist. Fact checking works under the right circumstances, but its effect is limited to the specific fake news being targeted. Research on enhancing users’ digital media literacy is still in its infancy, but the results from our experiment promote it as an effective addition to fact checking that would be cheap, scalable and easy for social media platforms to implement.

The blogs published by the bidt represent the views of the authors; they do not reflect the position of the Institute as a whole.