Artificial Influence? Comparing AI and Human Persuasion in Reducing Belief Certainty
Abstract
People often resist updating their beliefs even when those beliefs are contradicted by strong evidence, making efforts to persuade them seem futile. While some new research suggests AI could be a solution to this problem, its persuasive capacity remains underexplored. This pre-registered study tests five hypotheses by examining whether Large Language Models (LLMs) can reduce belief certainty for a sample of N=1,690 Americans recruited through CloudResearch, all of whom hold at least one false, or unsupported, belief. All treated participants engaged in up to five rounds of conversation with ChatGPT-4o, but the treatment manipulated who they believed they were talking to: ChatGPT, an expert on the topic, or a fellow survey respondent who disagreed with them. Across all conditions, we find that AI reduced participants’ certainty in their false or unsupported beliefs, with 29% of participants even persuaded to switch to the accurate counterpart of the belief. Interestingly, ChatGPT does not have a significantly larger effect on reducing belief certainty than a fellow survey taker, but an expert does. We do not find that perceptions of AI objectivity and knowledgeability serve as moderators for the AI condition, and neither does anti-intellectualism for the expert condition. In shifting the focus to the messenger, our results contribute to our understanding of effective strategies for persuasion. We show that AI can indeed be persuasive, even in the face of strongly held beliefs; however, when source identity is considered, human experts hold a much stronger appeal.