Deepfake/Real Harms: An online intervention to reduce deepfake abuse perpetration and myth acceptance.

Abstract

Deepfake abuse involves the use of generative AI to create non-consensual synthetic intimate imagery (NSII): pornographic videos into which someone's likeness is inserted without their consent. Deepfake videos were originally used to harass and defame celebrity women and activists, but a growing number of cases target ordinary people. While legal and technological efforts to mitigate perpetration exist, there are no evidence-based interventions aimed at preventing perpetration behaviour. We designed a short-form online intervention for deepfake NSII. The Deepfake/Real Harms intervention consists of three vignettes and takes approximately 10 minutes to complete, aiming to increase empathy towards victims, educate participants about deepfake myths, and reduce perpetration intentions. Across three pre-registered experimental studies (N = 1628), we provide evidence of the efficacy and acceptability of the intervention, including tests in high-perpetrating populations. The intervention lowered belief in myths about deepfakes (e.g. that they are not harmful because they are not real) and partially reduced intentions to perpetrate (e.g. to watch, share, or create deepfake NSII), with the effect on intention to watch still evident at follow-ups ranging from one week to one month. We have made the current intervention freely available online and recommend follow-up work testing adapted versions of the resource with other vulnerable groups, such as schoolchildren.