Artificial Influence: Comparing the Effects of AI and Human Source Cues in Reducing Certainty in False Beliefs


Abstract

People often resist updating their beliefs, even when confronted with strong evidence. While some research suggests artificial intelligence (AI) could be a solution to this problem, its persuasive capacity remains underexplored. This pre-registered study examines whether large language models (LLMs) can reduce belief certainty among a sample of N=1,730 Americans, all of whom held at least one false or unsupported belief. Treated participants engaged in up to five rounds of conversation with ChatGPT-4o, with the treatment manipulating who participants were told they were speaking with: ChatGPT, an expert on the topic, or a fellow survey respondent who disagreed with them. Across all conditions, we find that the conversation with AI reduced participants' certainty in their false or unsupported beliefs, with 29% of participants even persuaded to switch to an accurate belief. However, ChatGPT as a source label did not contribute to this persuasive capacity. The reduction in belief certainty was not significantly greater for the ChatGPT label than for the fellow survey-taker label, though it was for the expert label. Our findings point largely to message effects: even brief, multi-round conversations with AI have a clear influence, underscoring the significance of both the content and the interactive nature of these conversations. However, the role of source cues remains critical, with the appeal of a human expert source label over a ChatGPT label raising important questions with respect to the use of AI as a tool for persuasion.