Reducing the Impact of AI-Generated Misinformation on Memory and Perception of a Police Incident
Abstract
Generative Artificial Intelligence (AI) systems are increasingly used to generate police-incident reports. Yet such systems can produce errors, which may detrimentally impact legal processes. The current study examined the impact of an AI-generated report on memory and perceptions of a police incident, and whether safeguards can reduce those impacts. Participants (N = 432) read an accurate summary of a domestic-violence incident; 24 hours later, they read an AI-generated report that was either accurate or contained misinformation, before completing memory tests and rating their perceptions of the incident. Two safeguards were tested: (1) active error-monitoring, and (2) watching body-worn camera footage. Misinformation significantly impaired memory and influenced perceptions of the incident, especially among participants who strongly identified with police, and the safeguards were largely effective at reducing these impacts. Findings demonstrate the risks of using generative AI in forensic settings and underscore the need for appropriate safeguards to combat its detrimental effects.