AI-Generated Explicit Deepfakes Damage Politicians’ Perceived Leadership Competence, Trustworthiness and Electoral Prospects

Abstract

The rise of generative AI has made it easier than ever to produce hyperrealistic but entirely false imagery. One especially harmful use is the creation of non-consensual, sexually explicit deepfakes depicting real people. While such content clearly violates privacy, its broader political implications remain underexplored. This paper investigates how sexually explicit AI-generated imagery affects public perceptions of politicians and broader democratic outcomes. Drawing on a pre-registered survey experiment in the U.S. (N = 1904), we exposed participants to either explicit, private, or control images of a fictitious male or female politician embedded in a mock social media feed. The results indicate that explicit deepfakes significantly diminish candidate evaluations, particularly with respect to affect, trustworthiness, perceived competence, and leadership qualities. Exposure to AI-generated non-consensual intimate imagery (NCII) further leads to lower voting intentions for the targeted politician, illustrating how reputational harm can translate into electoral consequences. Contrary to expectations, male politicians were more strongly affected by these negative shifts in perception. However, we find no evidence that exposure to such content reduces participants' own political ambition. These findings highlight the reputational risks posed by synthetic intimate media and underscore the urgent need to mitigate the threats it poses to democratic processes.
