LLMs and Meaning: what the current semantic challenges in LLMs highlight about Natural Language
Abstract
Recent advances in large language models (LLMs) have led to increasingly strong claims about their human-level linguistic competence and, by extension, their relevance for understanding natural language and cognition. While a growing interdisciplinary literature has used behavioural tests and psychometric tools to assess LLM performance, particularly in linguistics, such comparisons raise substantial conceptual and methodological concerns. This paper examines what current semantic limitations in LLMs reveal about the nature of meaning in natural language. Drawing on empirical findings from psycholinguistics, the analysis shows that, despite fluent surface-level performance, LLMs systematically lack core aspects of semantic competence, including real-world grounding, communicative intent, and stable grammatical judgment. These limitations are then situated within broader theoretical frameworks from the philosophy of cognitive science, focusing on Miracchi's reformulation of the Frame Problem and the Relevance Realisation framework. Together, these approaches highlight the central role of embodied agency, environmental embeddedness, and socio-cultural context in human meaning-making, and clarify why current computational architectures struggle with semantic relevance. The paper concludes by arguing for a bio-cultural conception of language, suggesting that the semantic gaps observed in LLMs are not merely technical shortcomings but rather reflect fundamental differences between algorithmic systems and human linguistic cognition.