Sense-Making Reconsidered: Large Language Models and the Blind Spot of Embodied Cognition
Abstract
Large Language Models (LLMs) demonstrate a kind of linguistic competence that theories of embodied and enactive cognition have long deemed impossible for systems lacking the meaningful perspective of a living being, i.e., the capacity for sense-making. Facing up to this unexpected development requires confronting the following AI dilemma: either LLMs can be sense-makers in their own right despite lacking biological embodiment, or such linguistic competence does not require sense-making after all. In their chapter on cognition, Frank, Thompson, and Gleiser (2024) maintain that no AI system comes close to realizing relevance, a position that derives much of its force from past AI failures. However, given that LLMs have now effectively overcome the commonsense knowledge problem in practice, the claim that they are categorically mindless becomes harder to sustain. Moreover, accepting a strict dissociation between competence and sense-making risks undermining Frank et al.’s central claim that human cognition is intertwined with lived experience. I therefore propose that we accept the dilemma’s first horn: LLM competence obliges us to recognize these AI systems as a novel non-biological form of sense-maker endowed with a distinctive, technologically-mediated embodiment. This reorientation invites enactive theory to clarify which aspects of sense-making are universal and which are contingent on organic life, thereby advancing its conceptual framework in dialogue with contemporary AI.