It Works When It Works: Measuring the Direct and Indirect Effects of AI Labels on Political Images

Abstract

Recent advancements in the availability and sophistication of generative artificial intelligence have been accompanied by widespread concerns about the public's ability to navigate the digital information environment, especially during social and political events. A growing consensus among policymakers, academics, and industry leaders has emerged around the need to apply labels communicating whether AI was used in content creation. Labeling as a media literacy strategy and policy intervention has gained momentum, but what impact do synthetic content labels have on the public? We run two online experiments to measure the effect of content labeling on perceptions of political images: their perceived provenance, their perceived veracity, and the intention to engage with them. We find that AI labels effectively communicate content provenance, significantly reducing perceived human involvement in image creation regardless of whether the images were actually AI-generated. At the same time, labels have no impact on perceived veracity or self-reported engagement intentions. In a follow-on study, we find limited evidence of the "implied authenticity effect," whereby exposure to labeled synthetic images increases the perceived human provenance of subsequent unlabeled synthetic images. However, we show that this near-zero total effect is the result of two offsetting pathways: exposure to labeled synthetic images implies that subsequent unlabeled synthetic images are more authentic, but this effect is countered by the increased skepticism induced by prior exposure to labeled content. Taken together, our research contributes to the growing academic literature on content labeling as a media literacy intervention, as well as to ongoing efforts to develop evidence-based strategies for mitigating the harmful political effects of generative AI.
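
The offsetting-pathways finding can be summarized in a short decomposition. As a minimal sketch in standard mediation-style notation (the symbols below are our own shorthand, not notation taken from the paper), the total effect of prior exposure to labeled synthetic images on the perceived authenticity of later unlabeled synthetic images splits into a positive implied-authenticity pathway and a negative general-skepticism pathway:

\[
\tau_{\text{total}} \;=\; \underbrace{\tau_{\text{implied}}}_{>\,0} \;+\; \underbrace{\tau_{\text{skepticism}}}_{<\,0} \;\approx\; 0 .
\]

When the two pathways are of similar magnitude and opposite sign, the total effect is near zero even though neither pathway individually is.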
