AI reduces conspiracy beliefs even when presented as a human expert
Abstract
Although conspiracy beliefs are often viewed as resistant to correction, recent evidence shows that personalized, fact-based dialogues with artificial intelligence (AI) can reduce them. Is this effect driven by the debunking facts and evidence, or does it rely on the messenger being an AI model? In other words, would the same message be equally effective if delivered by a human? To answer this question, we conducted a preregistered experiment (N = 955) in which participants reported either a conspiracy belief or a non-conspiratorial but epistemically unwarranted belief, and then interacted with an AI model that argued against that belief using facts and evidence. We randomized whether the debunking AI model was characterized as an AI tool or as a human expert, and whether the model used a human-like conversational tone. The conversations significantly reduced participants’ confidence in both conspiracies and epistemically unwarranted beliefs, with no significant differences across conditions. Thus, AI persuasion does not rely on the messenger being an AI model: it succeeds by generating compelling messages.