Regulating Authenticity in Artificial Times: Synthetic interactions, democratic risks, and the limitations of the AI Act’s transparency obligations.


Abstract

The AI Act seeks to mitigate the democratic risks that arise when people can no longer reliably distinguish synthetic content from human-generated, authentic content. This manuscript critically examines the regulatory function the AI Act assigns to the notion of “authenticity” in addressing these risks. An ambiguous and contested notion, authenticity broadly captures the correspondence between what a particular thing or object is and what it claims to be. This paper first examines how (perceptions of) authenticity are constructed (What does authenticity entail? Section 2), and second how these perceptions may inform people’s (1) assessment of journalistic content, (2) evaluation of political communication, and (3) capacity for self-expression and identity formation (Why does authenticity matter from a democratic perspective? Section 3). We then analyse how the use of generative AI can affect (both positively and negatively) perceptions of authenticity in these three domains (How does AI affect authenticity? Section 4). In a final step, we use this framework as a critical lens to reappraise the protection Article 50 of the AI Act affords through its transparency and disclosure obligations (Does Article 50 AI Act effectively protect people against unwarranted authenticity distortions? Section 5). Our analysis demonstrates why Article 50 of the AI Act fails to empower citizens against the democratic risks synthetic media pose. At a foundational level, the AI Act conflates artificiality with inauthenticity and untrustworthiness, thereby undermining its stated ambition of safeguarding the democratic integrity and foundations of the information society. At an operational level, the law’s disclosure obligations lack sufficient granularity to enable citizens to engage with and critically assess the outputs of generative systems.
Transparency strategies that relay information about why AI was used and for what purposes, the data underpinning content generation, and the (professional) values of the actor responsible for the content could be more empowering. At the same time, caution is warranted: people may rely on the anonymity that AI affords to express themselves or to spread political messages. In these cases, far-reaching disclosure obligations could be counterproductive.