The (In)Efficacy of AI Personas in Deception Detection Experiments


Abstract

Apart from recent exceptions, decades of research suggest that humans perform poorly in deception detection studies: accuracy is only slightly better than chance, and humans are often truth-biased. We report 12 studies in which gemini-1.5-flash, accessed through the Viewpoints.ai research platform, made veracity judgments of human communicators; we systematically varied the nature and duration of the communication, the modality, the truth-lie base rate, and the AI persona. AI performed best (57.5% accuracy) when detecting truths and lies about feelings toward friends, although it was notably truth-biased (71.7%). In assessing cheating interrogations, however, AI was lie-biased, judging more than three-quarters of interviewees to be cheating liars. In assessing interviews on which humans achieve accuracy above 70%, AI performed below chance, and its accuracy plummeted to 15.9% at an ecological base rate. AI yielded results different from those of prior human studies and exhibited a strong guilt bias in transgression denials. We presently advocate against using certain large language models for lie detection.
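
To make the experimental protocol concrete, the sketch below shows one plausible way to elicit a persona-conditioned binary veracity judgment from gemini-1.5-flash. The study itself ran through the Viewpoints.ai platform; this stand-alone version uses the public google-generativeai SDK instead, and the persona and prompt wording here are illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch of an LLM veracity-judgment protocol, assuming the public
# google-generativeai SDK rather than the Viewpoints.ai platform used in the
# paper. Persona and prompt text are hypothetical stand-ins.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

# Hypothetical persona; the study systematically varied the AI persona.
PERSONA = "You are an experienced interrogator skilled at spotting deception."

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=PERSONA,
)

def judge_veracity(transcript: str) -> str:
    """Ask the model for a binary truth/lie judgment on one transcript."""
    prompt = (
        "Read the following interview transcript and decide whether the "
        "speaker is telling the truth or lying. Answer with exactly one "
        f"word: TRUTH or LIE.\n\nTranscript:\n{transcript}"
    )
    response = model.generate_content(prompt)
    return response.text.strip().upper()

# Note on base rates: a lie-biased judge suffers badly at an ecological
# (mostly honest) base rate. If only a small fraction of speakers lie and
# the judge labels most speakers as liars, it is wrong on most of the
# truthful majority, which is consistent with accuracy far below chance.
```

Aggregating `judge_veracity` over a stimulus set, and repeating the loop across personas, modalities, and truth-lie base rates, would reproduce the general shape of the design the abstract describes.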
