“Transparency is More Than Just a Label”: Audiences’ Information Needs for AI Use Disclosures in News

Abstract

This study evaluates the role of transparency in rebuilding trust in journalism amid the increasing integration of generative AI (GenAI) tools in news production. While AI technologies enhance journalistic workflows by automating and augmenting content creation, they also introduce opacity into professional decisions, complicating traditional transparency norms and their associated disclosure practices. The paper maps citizens’ perceptions of disclosures of AI use in journalism, addressing two key themes: (1) the practical information needs of news consumers regarding AI-generated content and (2) the specific rationales audiences apply when exposed to (disclosures of) AI-generated news content. Drawing on qualitative focus groups (N = 21), this exploratory research reveals a strong demand for clear, visible, and detailed disclosures of AI-generated content. Participants emphasized the necessity of general source references akin to traditional authorship attributions that explicitly state AI involvement, for example a label such as ‘generated by AI’ alongside author and publication details. Visual indicators such as logos or watermarks in contrasting colors were preferred to ensure AI disclosures are noticeable and not easily overlooked. The study contributes to the development of more granular, audience-centered, practical guidelines for AI transparency in journalism that go beyond the mere ‘label’, emphasizing that effective disclosure requires more than simply ‘informing’ audiences.