Comprehending semantic and syntactic anomalies in text attributed to an LLM versus a human: An ERP study
Abstract
As people increasingly interact with large language models (LLMs), a critical question emerges: do humans process language differently when communicating with an LLM versus another human? While there is good evidence that people adapt comprehension based on their expectations toward their interlocutor in human–human interaction, human–computer interaction research suggests that adaptation to machines is often suspended until an expectation violation occurs. We conducted two event-related potential experiments on Chinese sentence comprehension, measuring neural responses to semantic and syntactic anomalies attributed to either an LLM or a human. Experiment 1 revealed reduced N400 but larger P600 responses to semantic anomalies in LLM-attributed text than in human-attributed text, suggesting that participants anticipated semantic errors yet required greater composition/integration effort. Experiment 2 showed enhanced P600 responses to LLM-attributed compared with human-attributed syntactic anomalies, reflecting greater reanalysis or integration difficulty for the former. Notably, neural responses to LLM-attributed semantic anomalies (but not syntactic anomalies) were further modulated by participants’ beliefs about humanlike knowledge in LLMs, with a larger N400 and a smaller P600 in participants holding a stronger belief in humanlike knowledge in LLMs. These findings provide the first neurocognitive evidence that people develop mental models of LLM capabilities and adapt neural processing accordingly, offering theoretical insights aligned with multidisciplinary frameworks and practical implications for designing effective human–AI communication systems.