Feeling iffy about generative AI: When journalists disclose AI use, trust in news is lower

Abstract

News organisations are experimenting with how best to integrate generative AI into their journalistic workflows. This raises important questions about how such use should be disclosed, and what effects AI disclosures have on readers. Prior research shows predominantly negative effects on perceived trustworthiness and credibility, but says little about how different use cases compare to each other. In this study, we report the results of a conjoint experiment (N = 683) on the effects of nuanced AI disclosures on the perceived trustworthiness of news. Our results confirm prior research in that we find negative effects for all kinds of AI disclosures. However, moderation and cluster analyses suggest that these effects are not universal but depend on individual-level characteristics that co-determine AI disclosure effects. By (1) highlighting important individual-level moderators such as respondents’ political position and their attitudes towards and knowledge of AI, and (2) describing five distinctive preference profiles and their predictors, our results inform future research and help practitioners tailor AI disclosures to particular groups of readers.