Personalized conversations with a trained artificial intelligence (AI) chatbot can reduce belief in conspiracy theories – even in the most obdurate individuals – according to a new study. The findings, which challenge the idea that such beliefs are impervious to change, point to a new tool for combating misinformation.

“It has become almost a truism that people ‘down the rabbit hole’ of conspiracy belief are almost impossible to reach,” write the authors. “In contrast to this pessimistic view, we [show] that a relatively brief conversation with a generative AI model can produce a large and lasting decrease in conspiracy beliefs, even among people whose beliefs are deeply entrenched.”

Conspiracy theories – beliefs that a secret but influential malevolent organization is responsible for an event or phenomenon – are notoriously persistent and pose a serious threat to democratic societies. Despite their implausibility, a large fraction of the global population believes in them, including as much as 50% of the United States population by some estimates. Their persistence in the face of clear counterevidence is often attributed to social-psychological processes that fulfill psychological needs and to the motivation to maintain identity and group memberships. Current interventions to debunk conspiracy theories among existing believers are largely ineffective.
Thomas Costello and colleagues investigated whether large language models (LLMs) such as GPT-4 Turbo can effectively debunk conspiracy theories by drawing on their vast access to information and by tailoring counterarguments to the specific evidence that believers present. In a series of experiments encompassing 2,190 conspiracy believers, participants engaged in several personalized interactions with an LLM, sharing their conspiratorial beliefs and the evidence they felt supported them. In turn, the LLM directly refuted these claims with tailored, factual, evidence-based counterarguments. A professional fact-checker hired to evaluate the accuracy of the claims made by GPT-4 Turbo rated 99.2% of those claims as “true,” 0.8% as “misleading,” and none as “false”; none were found to contain liberal or conservative bias.

Costello et al. found that these AI-driven dialogues reduced participants’ misinformed beliefs by an average of 20%. The effect lasted for at least 2 months and was observed across various unrelated conspiracy theories, as well as across demographic categories. According to the authors, the findings challenge the idea that evidence and arguments are ineffective once someone has adopted a conspiracy theory. They also call into question social-psychological theories that cast psychological needs and motivations as the main drivers of conspiracy beliefs.

“For better or worse, AI is set to profoundly change our culture,” write Bence Bago and Jean-François Bonnefon in a related Perspective. “Although widely criticized as a force multiplier for misinformation, the study by Costello et al. demonstrates a potential positive application of generative AI’s persuasive power.”
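For readers curious how such a dialogue might be wired up, the sketch below is a minimal, hypothetical reconstruction of the conversational core described above: a multi-turn chat in which the model rebuts the user’s stated belief with tailored, evidence-based counterarguments. It is not the study’s actual code; the system prompt is invented, and it assumes the OpenAI Python client and the publicly available gpt-4-turbo model.

```python
# Minimal sketch of a personalized debunking dialogue, loosely modeled on the
# setup described in the study. The prompt wording and model name are
# illustrative assumptions, not the authors' published implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Respond politely with factual, "
    "evidence-based counterarguments tailored to the specific claims and "
    "evidence the user offers."
)

def debunking_dialogue(rounds: int = 3, model: str = "gpt-4-turbo") -> None:
    """Run a short multi-turn conversation in which the model rebuts the
    user's stated conspiracy belief with tailored counterarguments."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for _ in range(rounds):
        claim = input("Describe your belief and the evidence for it: ")
        messages.append({"role": "user", "content": claim})
        reply = client.chat.completions.create(model=model, messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(answer)

if __name__ == "__main__":
    debunking_dialogue()
```

In the actual experiments, participants also rated their belief before and after the conversation, which is how the 20% average reduction was measured; the sketch reproduces only the dialogue itself.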
A version of the chatbot referenced in this paper can be visited at https://www.debunkbot.com/conspiracies.
A related embargoed news briefing was held on Tuesday, 10 September, as a Zoom Webinar. Recordings can be found at the following links:
- Video: https://aaas.zoom.us/rec/share/aoSQ0AgWVHF0l7vE9-6LHHqmiLdxgApjJk_VQekHv7VidXfTZozRZOXxkXm3swi9.YUuogoQ-ZGLnAbnM
- Audio: https://aaas.zoom.us/rec/share/bTiYBoHcxYdKkzivIwYgt_Fd3Qg0Xll0aw_oc6vns03kyqayp-wZ9sbHDBGBSpZY.a41AWWIqSI-QcUqH
The passcode for both recordings is &M67bgdd.
Journal: Science

Article Title: Durably reducing conspiracy beliefs through dialogues with AI

Article Publication Date: 13-Sep-2024