Countering AI-Generated Misinformation With Pre-Emptive Source Discreditation and Debunking


Abstract

Despite widespread concerns over AI-generated misinformation, its impact on people’s reasoning and the effectiveness of countermeasures remain unclear. This study examined whether a pre-emptive, source-focused inoculation—designed to lower trust in AI-generated information—could reduce its influence on reasoning. This approach was compared with a retroactive, content-focused debunking, as well as a simple disclaimer that AI-generated information may be misleading, as often seen on real-world platforms. Additionally, the extent to which trust in AI-generated information is malleable was tested with an intervention designed to boost trust. Across two experiments (total N = 1,223), a misleading AI-generated article influenced reasoning regardless of its alleged source (human or AI). In both experiments, the inoculation reduced general trust in AI-generated information but did not significantly reduce the misleading article’s specific influence on reasoning. The additional trust-boosting and disclaimer interventions used in Experiment 1 also had no impact. By contrast, debunking of the misinformation in Experiment 2 effectively reduced its impact, although only a combination of inoculation and debunking eliminated the misinformation’s influence entirely. Findings demonstrate that generative AI can be a persuasive source of misinformation, potentially requiring multiple countermeasures to negate its effects.
