Misinformation Detection by AI Chatbots and Humans

Abstract

Scientists, practitioners, journalists, and laypeople have expressed concerns about misinformation produced with generative artificial intelligence (AI), because AI-generated misinformation can appear more credible than misinformation produced by humans. Yet little is known about how AI compares to humans in detecting misinformation. To address this question, two preregistered studies compared 15 major AI chatbots to human participants (N = 2,461) with regard to three factors (truth discernment, acceptance threshold, and slant bias) in the detection of political misinformation (Exp. 1) and misinformation about vaccines (Exp. 2). The results show that, for both topics, AI chatbots discerned true from false information with much greater accuracy than U.S. Democrats and Republicans (political misinformation, Exp. 1) and human participants with favorable, unfavorable, or neutral vaccine attitudes (vaccine misinformation, Exp. 2). AI chatbots also outperformed the “wisdom of crowds”, an approach that has been shown to approximate the veracity judgments of professional fact-checkers. At the same time, many AI chatbots exhibited large biases comparable to those of human participants, tending to accept or reject information regardless of its veracity and to favor information with a particular slant. Overall, AI chatbots showed large variation in all three factors of misinformation detection. The results challenge conceptions of AI chatbots as hallucinating lie-machines, while drawing attention to biases and large differences between chatbots.