Pragmatic Competence Without Embodiment? Evaluating LLM Performance on Implicature, Presupposition, and Speech Acts
Abstract
Pragmatic competence (the ability to infer implied meaning, recognize presuppositions, and interpret speech acts) has long been viewed as a uniquely human capacity grounded in embodied experience and social interaction. With the rapid rise of large language models (LLMs), however, questions have emerged about whether disembodied systems can approximate these abilities. This study examines human and LLM performance across three core pragmatic domains: conversational implicature, presupposition, and speech acts. Using a controlled set of sixty stimuli evaluated by human participants and a state-of-the-art LLM, the study compares accuracy, error patterns, and interpretive tendencies across groups. Results show that while the model handles some conventionalized pragmatic cues successfully, it consistently falls short of human performance, particularly in tasks requiring contextual inference, accommodation, or recognition of indirect illocutionary force. Error analyses reveal systematic tendencies toward literal interpretation, failed or excessive presupposition accommodation, and difficulty identifying the social or interpersonal dimensions of speech acts. These findings reinforce theoretical claims that pragmatic competence depends on embodied cognition and social grounding, and they highlight the limitations of current LLMs in communicative contexts requiring subtle or intention-based reasoning. The study concludes by discussing the implications of these limitations for AI deployment, language education, and the future development of more pragmatically aware artificial systems.