Reducing belief in conspiracy theories as they unfold using large language models



Abstract

The emergence of conspiracy theories in the wake of major events is a significant societal challenge. Using the July 2024 assassination attempt on Donald Trump as a case study, here we test whether conversational dialogues with a large language model (LLM) can reduce belief in immediately unfolding conspiracies. In an experiment conducted in the days following the assassination attempt (N = 789 U.S. adults), participants holding conspiratorial views (for example, some Democrats believed the attempt was staged, while some Republicans believed it was an inside job) engaged in a five-turn conversation with an LLM designed to reduce their conspiracy beliefs. Compared with control participants who either discussed an irrelevant topic with an LLM or viewed a static fact sheet about the assassination attempt, participants in the LLM treatment showed significantly reduced conspiracy beliefs (d = 0.38). Two months later, after a second assassination attempt on Donald Trump, treated participants were half as likely to believe conspiracies surrounding this new event. Linguistic analyses indicate that the AI achieved this by promoting critical thinking and epistemic caution rather than solely by providing factual rebuttals. These results highlight the potential of scalable, cognitively focused interventions to counteract misinformation in the immediate aftermath of high-profile societal events.
