Seeing is Believing: The Continued Influence of Known AI-Generated ‘Deepfake’ Videos
Abstract
Advances in artificial intelligence (AI) mean it is becoming easier to create highly realistic deepfake videos, which can appear to show someone doing or saying something they did not in fact do or say. Deepfakes may present a threat to individuals and society: for example, they could be used to influence elections by discrediting political opponents. Psychological research shows that people's ability to detect deepfake videos varies considerably, making us potentially vulnerable to the influence of a video we have failed to identify as fake. However, little is yet known about the potential impact of a deepfake video that has been explicitly identified and flagged as fake. Examining this issue is important because current legislative initiatives to regulate AI emphasize transparency. We report three preregistered experiments in which participants were shown a deepfake video of someone appearing to confess to committing a crime or a moral transgression, preceded in some conditions by a warning stating that the video was a deepfake. Participants were then asked questions about the person's guilt, to examine the influence of the video's content. We found that most participants relied on the content of the deepfake video, even when they had been explicitly warned beforehand that it was fake. This result was observed even among participants who indicated that they believed the warning and knew the video to be fake. Our findings suggest that transparency is insufficient to entirely negate the influence of deepfake videos, which has implications for legislators, policy makers, and regulators of online content.