Countering AI-Generated Misinformation With Pre-Emptive Source Discreditation and Debunking


Abstract

Despite concerns over AI-generated misinformation, it is unclear how strongly such misinformation affects people or how effective countermeasures are. This study examined whether the influence of AI-generated misinformation on people’s reasoning could be reduced by a pre-emptive, source-focused inoculation or a retroactive, content-focused debunking. In two experiments (N > 1000), a misleading AI-generated article influenced people’s reasoning regardless of its alleged source (human or AI). In both experiments, the inoculation reduced participants’ general trust in AI-generated information but failed to significantly reduce the specific influence of the misleading article. Additional trust-boosting and disclaimer interventions in Experiment 1 likewise had no effect. By contrast, a debunking in Experiment 2 effectively reduced the misinformation’s impact, although only a combination of inoculation and debunking eliminated its influence entirely. The findings demonstrate that generative AI can be a persuasive source of misinformation, and that multiple countermeasures may be required to negate its effects.