No easy fix to countering AI-generated visual disinformation: The (in)effectiveness of AI-labels, fact-check labels and community notes
Abstract
As generative AI makes it easier to create synthetic visuals, AI-driven visual disinformation is becoming more common on social media. However, while much research highlights its potential harm, less is known about how to reduce its potential to mislead. In this study, we therefore conducted a preregistered online experiment in the Netherlands (N = 1,018) to test the effectiveness of three platform interventions: (1) AI labels or “watermarks,” (2) fact-check labels, and (3) community notes. We tested how effective these interventions are at lowering the credibility of the false visual and belief in the false claim it portrays across two polarizing topics: climate change and immigration. Overall, the interventions showed no significant differences in effectiveness. This was the case both when pooling the two topics and for climate-change-related disinformation in isolation. However, for visual disinformation about immigration, community notes were most effective, especially among participants with strong anti-migrant views. Our findings suggest that while labeling has limited impact overall, its effectiveness varies by context, and no one-size-fits-all solution exists for combating AI-generated visual disinformation.