Tricky Trade-Offs on a Transparency Spectrum: How the Financial Times Approaches Transparency about AI Use in News
Abstract
As news organisations adopt artificial intelligence (AI), they face growing pressure to be transparent about when and how it is used – yet practical approaches remain uneven. This paper examines AI transparency through an in-depth case study of the Financial Times (FT). Drawing on 13 semi-structured interviews with 12 senior managers across editorial, product, data science, and communications, together with internal documents, we show that the FT approaches transparency as a hybrid of policy, process, and practice, framed by a desire to safeguard both internal and external trust. Transparency is calibrated to context: internally, AI use is signposted in tools and reinforced through training and personal accountability; externally, the prominence of disclosure scales with system autonomy and editorial oversight, with stronger labelling for no-human-in-the-loop features than for AI-assisted, journalist-edited outputs. We identify nine factors that shape audience-facing disclosure – legal and provider requirements, industry benchmarking, the nature of the task, human oversight, system novelty, audience expectations, perceived risk, commercial sensitivities, and design constraints – and five cross-cutting challenges, including site-wide consistency (especially on mobile) and the potential for “transparency backfire”. Conceptually, our analysis links AI transparency to isomorphic pressures and to intersecting institutional logics. We argue that AI transparency in news is best understood as a spectrum, evolving with technological advancement; commercial, professional, and ethical considerations; and shifting audience attitudes.