How can we decide whether a piece of news is true or false? How often do we check the facts, and how readily do we believe the information we receive? To what extent do our offline and online communities influence our ability to detect misinformation?
Misinformation and disinformation can have devastating consequences, as the Covid-19 pandemic has made painfully clear: incorrect information can even cause preventable deaths. This is why policy makers, news providers and social media platforms are keen to adopt tools to combat the diffusion of fake news. To overcome this challenge, it is important to understand why fake news propagates.
Dr. Nicole Tabasso is a lecturer at the University of Surrey (UK), currently on leave until 2022 to carry out her research in Venice as a Marie Skłodowska-Curie Actions Fellow at the Department of Economics. Her project, ION: Information Diffusion on Networks, supervised by Professor Sergio Currarini, studies the diffusion of information online in order to contribute to the fight against the spread of fake news. It addresses issues that are also being explored at Ca’ Foscari by researchers such as Fabiana Zollo at the Department of Environmental Sciences, Informatics and Statistics.
Dr. Tabasso, why does fake news spread and why do people share it?
Having access to information is crucial because we base our decisions on what we know and on what we believe to be true. People are willing to pay to receive information and make better decisions, by purchasing newspapers or subscribing to news providers. However, a lot of information is now exchanged online through casual interactions between users, and this can have an impact on the way we act and on the diffusion of fake news.
Let’s consider what we know about Covid-19: information keeps changing as new data is collected, so the “truth” evolves over time. That makes it easy for incorrect information to spread. Even when the truth is “stable” and has been accessible for years, fake news keeps spreading online: consider the people who deny the link between HIV and AIDS, or those who believe the repeatedly disproved claim that the MMR vaccine may cause autism.
People generally share information that they believe to be true, especially with loved ones, like family members and friends. So ultimately, both accurate and inaccurate information is propagated by people who have good intentions, and once a piece of fake news is out there, it is difficult to eradicate it.
While people may have good intentions, they also tend to be biased: before we receive new information, we hold certain beliefs about the world, and we think that these are correct. We all have a tendency to find information that confirms our beliefs more trustworthy than information that contradicts them.
To understand how fake news spreads, we used a model similar to those used to trace the diffusion of diseases. It allows us to analyse how two different messages on the same topic, one correct and one incorrect, spread online even when people are able to verify the information. Verification can fail if people do not engage in it because it requires too much time or effort, or because there is too much or conflicting information. So what do we do when we cannot verify a piece of information? We fall back on our biases, and that is what we end up sharing. There is some evidence from online social networks that people may stick to their opinions even when confronted with opposing evidence (see, e.g., Zollo et al., 2017). In the end, the combination of biases and good intentions is enough to explain why even verifiable rumours keep spreading.
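The mechanism described here — contagion-style spreading plus imperfect verification plus bias as a fallback — can be illustrated with a toy simulation. This is only an illustrative sketch, not Dr. Tabasso’s actual model: the random-mixing network and every parameter (number of agents, contact rate, verification success rate, share of agents biased towards the truth) are invented for the demonstration.

```python
import random

random.seed(42)

N = 1000            # number of agents
MEET = 3            # random contacts per agent per period
P_VERIFY = 0.3      # chance a verification attempt succeeds (assumed)
P_BIAS_TRUE = 0.5   # share of agents whose prior favours the true message (assumed)
STEPS = 30

# state[i] is None (uninformed), or the message agent i holds: "true" / "false"
state = [None] * N
bias = ["true" if random.random() < P_BIAS_TRUE else "false" for _ in range(N)]

# seed a few initial holders of each message
for i in range(5):
    state[i] = "true"
for i in range(5, 10):
    state[i] = "false"

for _ in range(STEPS):
    for i in range(N):
        if state[i] is not None:
            continue  # already informed; keeps sharing the message it holds
        # meet a few random others and listen to any informed contact
        contacts = random.sample(range(N), MEET)
        heard = [state[j] for j in contacts if state[j] is not None]
        if not heard:
            continue
        random.choice(heard)  # a message reaches agent i
        if random.random() < P_VERIFY:
            state[i] = "true"   # successful verification reveals the truth
        else:
            state[i] = bias[i]  # verification fails: fall back on the prior

informed = [s for s in state if s is not None]
print("holding true message: ", informed.count("true"))
print("holding false message:", informed.count("false"))
```

Even with everyone sharing in good faith, the false message persists in the population: whenever verification fails, biased agents propagate whichever message matches their prior, which is the qualitative point made above.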
Has the amount of fake news increased because of social media?
Some studies seem to show that this is the case: fake news is more likely to spread on social media than through mainstream media. A possible reason for this is that, on social media, people can express their opinions without any filters — and of course this is not the case with “traditional” media. Therefore, information is spread in an “unregulated” way and this may raise our risk of encountering fake news. However, this theory is difficult to prove, because we would need to have data on the content of everyday conversations before the age of social media.
Online platforms have made communication easier, and we find in our study that this means both correct and incorrect information can spread further. This is something that is missing from public discourse: there is a lot of discussion about the spread of fake news in our digital age, but not about the spread of truthful information. Being uninformed can be just as harmful as believing something that is wrong. For example, people unaware of the existence of Covid-19 are likely to act just like those who believe the virus is harmless (or even deny its existence). Similarly, somebody unaware of the existence of HIV and its link to AIDS is just as unlikely to protect themselves against it as somebody who denies that link.
There are also indications that social media tends to increase homophily, our tendency to meet and interact with people who are similar to us, since we can now find like-minded people from all over the world. This means we are more likely to be exposed to information we “like” and to verify it less, which in turn makes us more likely to share unverified information. That helps rumours to spread.
What are the possible solutions?
The main aim of this research, which is still far from finished, is to understand why rumours spread, in order to reduce their diffusion. Our main finding so far is that the most effective policy is to enable people to check information easily and effectively. Websites and policy makers should therefore strive to provide people with fact-checking services, as some newspapers are already doing: the BBC, The New York Times and Le Monde all offer fact-checking pages that analyse the news and support the verification process.
Other attempts have been less successful, such as the red flag used by Facebook from December 2016 to December 2017 to indicate that a piece of information had been disputed. According to Facebook, this system was not effective at reducing misinformation, because it gave disputed news even greater visibility. In 2017 Facebook also tested an approach that provided users with “related articles” they could use to verify what they were reading, which proved more effective.
To me personally, perhaps the most important goal to work towards is increasing information literacy. This is something that national governments could begin to incorporate into educational curricula: if people learned how to recognise potentially fake information, that would be a step in the right direction. Teaching people how to engage critically with information is a long-term project.