Short dialogues with AI reduce belief in antisemitic conspiracy theories
Abstract
Antisemitic conspiracy theories have been central to anti-Jewish prejudice for centuries. Given their longevity and deep ties to religious, ethnic, and ideological identities, debunking them presents a particularly difficult challenge. Here, we test whether having believers discuss their chosen antisemitic conspiracy with a large language model (LLM) prompted to debunk such conspiracies can reduce belief and improve attitudes toward Jews. In a preregistered experiment (N = 1,224 U.S. adults endorsing an antisemitic conspiracy theory), participants were randomized to a dialogue with an LLM (Claude 3.5 Sonnet) prompted to debunk their belief or to one of two control conditions. The debunking dialogue substantially reduced belief in antisemitic conspiracies relative to controls and increased favorability toward Jews among initially unfavorable participants. These findings show that even deeply rooted, identity-linked conspiracies can be effectively debunked through factual correction, offering new insight into prejudice reduction and suggesting that LLM chatbots may help reduce antisemitism at scale.