CultureManip: A Benchmark for Mental Manipulation Detection Across Multilingual Settings
Abstract
Large language models (LLMs) show significant performance gaps in detecting mental manipulation across languages, with particularly pronounced limitations in low-resource settings. Despite extensive research on multilingual LLMs, mental manipulation detection in non-English languages remains understudied. We introduce CultureManip, a multilingual benchmark for binary mental manipulation detection, and evaluate ChatGPT-3.5 Turbo across four languages: English, Spanish, Chinese, and Tagalog. Using human-LLM agreement scores (the fraction of examples on which the model's predictions match human annotations), we reveal substantial performance degradation in non-English contexts. Human-LLM agreement drops from 48% in English to 41% in Spanish, 28% in Chinese, and just 20% in Tagalog, meaning the model disagrees with human annotators 80% of the time in the lowest-resource language. These results demonstrate a clear correlation between language resource availability and detection accuracy, highlighting critical challenges at the intersection of cultural context, linguistic structure, and manipulation identification. This work underscores the urgent need for culturally aware, multilingual approaches to mental manipulation detection in AI systems.
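To make the agreement metric concrete, the following minimal sketch computes the fraction of examples on which a model's binary manipulation label matches the human annotation. It is an illustration only: the function name, variable names, and toy labels are hypothetical and not drawn from the CultureManip benchmark or the authors' evaluation code.

# Illustrative sketch of the human-LLM agreement rate (not the authors' code).
def agreement_rate(human_labels, model_labels):
    """Fraction of items where the model's binary label (1 = manipulative,
    0 = not manipulative) matches the human annotation."""
    assert len(human_labels) == len(model_labels)
    matches = sum(h == m for h, m in zip(human_labels, model_labels))
    return matches / len(human_labels)

# Toy example with hypothetical labels where agreement is 20%,
# i.e. the model disagrees with the human annotation on 4 of 5 items.
human = [1, 1, 0, 1, 0]
model = [0, 0, 0, 0, 1]
print(f"human-LLM agreement: {agreement_rate(human, model):.0%}")  # prints 20%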