Countering AI-Generated Misinformation With Pre-Emptive Source Discreditation and Debunking
Abstract
Despite widespread concerns over AI-generated misinformation, it is unclear how much people are impacted by such misinformation or how effective countermeasures are. This study examined whether the influence of AI-generated misinformation on reasoning could be reduced by a pre-emptive, source-focused inoculation or a retroactive, content-focused debunking. In two experiments (total N = 1,223), a misleading AI-generated article influenced people’s reasoning, regardless of its alleged source (human or AI). In both experiments, the inoculation reduced participants’ general trust in AI-generated information, but failed to significantly reduce the specific influence of the misleading article on reasoning. Additional trust-boosting and disclaimer interventions used in Experiment 1 also had no impact. By contrast, debunking misinformation in Experiment 2 effectively reduced its impact, although only a combination of inoculation and debunking eliminated misinformation influence entirely. Findings demonstrate that generative AI can be a persuasive source of misinformation, potentially requiring multiple countermeasures to negate its effects.