News Release

Giving people 'digital literacy' tips can help them spot dubious information online

Peer-Reviewed Publication

University of Exeter

Giving people "digital literacy" tips can help them identify dubious information online, a new study shows.

The avalanche of online content available to people around the world challenges humans' ability to separate fact from what can be highly toxic and even dangerous fiction.

Researchers studying Facebook's efforts to educate users on how to spot misinformation found that people in the United States and India were less likely to say a false headline was true after they were exposed to tips on how to spot misinformation.

The Princeton University-led study, published in the Proceedings of the National Academy of Sciences (PNAS), also shows that this improved ability to spot erroneous information weakened over time, so digital literacy needs to be taught regularly.

"Most people struggle to reliably evaluate the quality of information they encounter online, even under the most ideal conditions," said Andy Guess, assistant professor of politics and public affairs at Princeton University.

"This is because they lack the skills and knowledge required to distinguish between high and low-quality news content. We find that effort to promote digital literacy can improve people's ability to evaluate the accuracy of online content."

This study is among the first to systematically explore the role that shortfalls in digital media literacy play in the spread of online misinformation.

"Digital literacy and media literacy are often proposed as solutions to online misinformation, but these approaches are not evaluated as often as they are proposed. If we want to develop tools that help people distinguish truth from falsity, it is essential to test the effectiveness of these tools," added Professor Jason Reifler from the University of Exeter.

The team originally set out to explore why people fall victim to misinformation, selecting the United States and India as both countries have struggled with misinformation campaigns, especially during national elections.

The team looked at the effects of Facebook's "Tips to Spot False News," which appeared at the top of users' news feeds in 14 countries in April 2017. The list was also printed as a full-page ad in many U.S. newspapers, and a version appeared in India as well.

These tips have likely been the most widely distributed digital-media literacy intervention. They also are not overly complex, allowing for quick decision-making. For example, one tip cautions readers to be skeptical of headlines, warning that if claims sound unbelievable, they probably are.

The researchers then employed a "two-wave panel design," studying the same group of people immediately after exposure to the tips, and then again several weeks later, allowing them to see whether the digital media literacy efforts took root over time.

Participants were exposed to the tips and then presented with the same series of mock headlines, which they rated for accuracy. The headlines were balanced in terms of partisan slant, well-known and lesser-known media outlets, and low-quality versus mainstream content. Although the tips were offered to respondents, the researchers could not compel anyone to read them, so they took this into account in their modeling.

This two-wave design was conducted online in both the U.S. and India, though in-person interviews were also conducted in areas of rural India where there is greater religious polarization and potentially higher risk of misinformation spread.

The team found that the intervention improved people's ability to discern between mainstream and false news headlines by 26.5 per cent in the U.S. and 17.5 per cent in India. In the U.S., this effect lessened but remained measurable several weeks later. Participants in the U.S. also became roughly one-third less likely to rate a false headline as accurate: their ratings of false headlines as "very accurate" or "somewhat accurate" went from 32 per cent to 24 per cent.

While the online results from the two countries were similar, the face-to-face interviews in India yielded different results: there was no evidence that exposure to the tips increased the perceived accuracy of mainstream news headlines. However, this group had much less experience evaluating news headlines online.

The researchers listed a few caveats with their work. First, the effects were modest, and the intervention did not completely eliminate belief in false news headlines. The effects also decayed over time, suggesting the need for regular reinforcement of these lessons. Lastly, it's not clear whether everyone actually read the tips.

The study provides opportunities for future research. Rather than relying on an intervention designed by a tech company, academics could develop interventions of their own and test them with samples from other countries and electoral contexts. Likewise, more intensive training models could be used to see whether the effects are more durable.

"We see no reason why this wouldn't work for any type of misinformation. Currently there are sources spreading misleading or even dangerous information about Covid-19 regarding protective measures, vaccines, miracle cures. We think this intervention could work in the public health domain, as well," Guess said.

###

Other academics involved in the study are Michael Lerner of the University of Michigan, Benjamin Lyons of the University of Utah, Jacob M. Montgomery of Washington University in St. Louis, Brendan Nyhan of Dartmouth College and Neelanjan Sircar of Ashoka University.

