Detection and Spill-Over Effects of AI-Generated Images in Political Messages: Evidence from Two Pre-Registered Experiments
Abstract
AI-generated images may be used not only by malicious actors in political communication, but also by (photo)journalists or NGOs who cannot, or for ethical reasons do not wish to, rely on photographs to document political events. Yet using synthetic visuals can erode message and source trust and may mislead audiences, especially when disclosures are missing. Addressing these issues, this article reports findings from two pre-registered between-subjects experiments (total N = 890) among German-speaking individuals on the detection and potential spill-over effects of AI-generated images, in which respondents were exposed to posts by fictional NGOs featuring either real photographs or (un)labeled AI-generated images. The mostly young and highly educated sample in Study 1 showed strong detection skills, but Study 2, which used a quota-based sample, revealed that the average person struggles to identify AI-generated images without disclaimers. Although labeling can significantly improve detection, it can also reduce message and source credibility among people who are distant from the political center, suggesting that the use of AI-generated images by actors such as NGOs is likely to be punished by individuals at the political margins. Meanwhile, the level of the images' probative value (i.e., their apparent evidentiary power) did not affect reactions to AI-generated visuals; perceptions of these images may thus not depend on the degree of documentation they purport to provide. Given the practical relevance of these findings, this article highlights not only the conceptual contributions of this work but also implications for political actors, policymakers, and media education.