Just the facts: How dialogues with AI reduce conspiracy beliefs

Abstract

Conspiracy beliefs are widely considered resistant to factual correction, yet recent research shows that relatively brief, personalized “debunking” dialogues with a generative AI model can substantially reduce such beliefs. To identify the mechanisms driving this effect, we conducted an experiment spanning eight treatment arms that varied key features of participants’ interactions with GPT-4 during such debunking dialogues (N = 1,297). The debunking effect proved robust across most manipulations, including whether participants were explicitly told that the AI aimed to change their minds, whether they were asked to debate the AI, and whether the AI offered factual information without otherwise seeking to persuade or kept its exposition concise. The only condition that undermined the debunking effect was prompting the AI to persuade participants without presenting any counterevidence, which yielded a null effect. Furthermore, analyses of the AI’s persuasive strategies identified reasoning-based tactics as the sole significant mediator of belief change. Participants who reported feeling persuaded overwhelmingly cited the AI’s rational, evidence-focused approach. Finally, participants higher in actively open-minded thinking showed larger treatment effects. These findings suggest that AI-driven interventions reduce conspiracy beliefs principally by providing factual, targeted counterarguments that address the specific reasons people hold these beliefs.
