Inability to detect deepfakes: Deepfake detection training improves detection accuracy, but increases emotional distress and reduces self-efficacy

Abstract

Deepfakes are media content generated by artificial intelligence with the intention of being perceived as real. Deepfakes pose an increasing risk to society due to their widespread availability and effective deceptiveness. Although several strategies have been developed to increase human deepfake detection performance, research using current technology is still sparse. In addition, the effects of deepfake exposure on personal distress are understudied. In this experiment, a training group (n = 48) underwent feedback-based deepfake detection training and improved accuracy by around 20%, while a control group without feedback (n = 48) did not. Furthermore, the training group reported a significant increase in emotional arousal, decrease in emotional valence, increase in anxiety about AI misuse, increase in negative affect, and decrease in self-efficacy compared to the control group. Exploratory analysis of participants’ reported detection strategies indicates that “naïve” participants are tricked by deceptive features in deepfakes that trained participants can learn to recognize. The results suggest that while feedback-based training is effective in increasing deepfake detection accuracy, confrontation with one’s inability to detect deepfakes may cause emotional distress.