Is this real? Susceptibility to deepfakes in machines and humans
Abstract
Deepfakes are synthetic media created by deep-generative methods to fake a person’s audio-visual representation. The growing sophistication of deepfake technology poses significant challenges for both machine learning (ML) algorithms and humans. Here we used real and deepfake static face images (Study 1) and dynamic videos (Study 2) (i) to investigate sources of misclassification errors in machines, (ii) to identify psychological mechanisms underlying detection performance in humans, and (iii) to compare humans and machines in their classification accuracy and decision confidence. Study 1 found that machines achieved excellent performance in classifying real and deepfake images, with good accuracy in feature classification. Humans, in contrast, struggled to distinguish between real and deepfake images: their classification accuracy was at chance level, and this underperformance relative to machines was accompanied by a truth bias and low confidence in detecting deepfake images. Using video stimuli, Study 2 found that machine performance was near chance level, with poor feature classification. Further, the machines showed a greater truth bias and reduced decision confidence relative to humans, who outperformed machines in detecting video deepfakes. Finally, the study revealed that higher analytical thinking, lower positive affect, and greater internet skills were associated with better video deepfake detection in humans. Together, the findings across these two studies advance understanding of the factors contributing to deepfake detection in both machines and humans and could inform interventions to tackle the growing threat from deepfakes by identifying areas that would particularly benefit from human-AI collaboration to optimize deepfake detection.