The Unintended Consequences of Labeling AI-Generated Media Online
Abstract
Media platforms have recently introduced initiatives to label AI-generated media, aiming to increase transparency about how content is created. Yet such efforts may carry unintended consequences. AI-generated media often accompany informational content that can vary in veracity, and labeling may confound perceptions of the media's authenticity with perceptions of the content's veracity, reducing belief in true information. Moreover, because it is not feasible to label all AI-generated media, partial labeling may lead people to assume that the absence of a label implies authenticity and/or veracity. We test for these labeling and implied effects in two survey experiments (N = 11,044) in which respondents evaluated political news posts. Labeling decreased perceptions of the authenticity of AI-generated images but also lowered belief in and willingness to share posts—even when the associated claims were true. Furthermore, exposure to partial labeling increased the perceived authenticity of unlabeled content. These results highlight the need for carefully designed labeling practices online.