Hebbian Inertia and Massless Reasoning: Comparative Cognitive Architecture in Human and Large Language Model Systems
Abstract
Purpose
Human societies and artificial intelligence models learn in fundamentally different ways. Humans update beliefs through repeated exposure, a Hebbian process that creates cognitive "inertia": once reinforced through experience, beliefs resist change. Large language models (LLMs), by contrast, do not update their parameters at inference; they pivot instantly between perspectives, unburdened by cumulative experience. Despite growing interest in comparing human and machine cognition, no prior study has placed both on a shared measurement scale that permits direct quantitative comparison. This paper reports the first such comparison, examining whether the architectural differences between human and LLM cognition produce measurably distinct signatures in belief structure and change.

Design/methodology/approach
We employ the Galileo Method, a multidimensional scaling technique that derives spatial representations of concepts from paired-comparison judgments of dissimilarity. Unlike conventional approaches that impose Euclidean geometry on psychological data, the Galileo Method preserves measured distances in their native metric, allowing non-Euclidean structures to emerge when warranted by the data. Participants, both human respondents and three LLM systems (Claude, DeepSeek, and ChatGPT-5), provided dissimilarity judgments for identical concept sets. These judgments were then projected into coordinate spaces, enabling comparison of distances, clustering patterns, and responses to belief-challenging information across agent types.

Findings
Human respondents display inertial pseudo-Riemannian signatures consistent with Hebbian learning: concepts reinforced through experience occupy stable positions that resist perturbation. LLMs, by contrast, exhibit what we term "massless" reasoning dynamics, repositioning concepts fluidly and without the friction imposed by prior reinforcement, with all concepts arrayed in Euclidean space. These patterns held across all three LLM systems tested, suggesting they reflect architectural properties of transformer-based models rather than idiosyncrasies of particular implementations.

Research implications
The results point toward a research program in comparative cognitive architecture. By placing human and machine cognition on shared measurement frameworks, researchers can move beyond impressionistic comparisons to systematic analysis of how different learning architectures shape the structure and malleability of belief.

Originality/value
This study provides the first direct human–LLM comparison using a shared, geometrically unconstrained measurement scale; demonstrates that Hebbian inertia leaves detectable signatures absent in LLM cognition; and establishes a methodological template for future comparative work.
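The geometric idea behind the abstract can be illustrated with a minimal sketch. Metric scaling in the Galileo tradition double-centers a matrix of measured dissimilarities and eigendecomposes it, but, unlike conventional MDS, retains negative eigenvalues as "imaginary" dimensions rather than discarding them, so a non-Euclidean (pseudo-Riemannian) signature in the judgments survives into the coordinate space. The dissimilarity matrix `D` below is a hypothetical four-concept example, not data from the study; it deliberately violates the triangle inequality so that a negative eigenvalue appears.

```python
import numpy as np

# Hypothetical dissimilarity matrix for 4 concepts (symmetric, zero diagonal).
# Note d(1,2) + d(1,4) = 1 + 3 < d(2,4) = 5: these distances cannot be
# embedded in any Euclidean space.
D = np.array([
    [0.0, 1.0, 4.0, 3.0],
    [1.0, 0.0, 2.0, 5.0],
    [4.0, 2.0, 0.0, 1.5],
    [3.0, 5.0, 1.5, 0.0],
])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centered scalar products

# Eigendecompose WITHOUT discarding negative eigenvalues: axes with a
# negative eigenvalue are "imaginary" dimensions of the Galileo space.
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]        # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

coords = eigvecs * np.sqrt(np.abs(eigvals))  # concept coordinates per axis
is_real_axis = eigvals >= 0                  # False marks imaginary axes

# Any clearly negative eigenvalue signals that the measured distances
# require a pseudo-Riemannian rather than Euclidean embedding.
print("eigenvalues:", np.round(eigvals, 3))
print("non-Euclidean:", bool((eigvals < -1e-9).any()))
```

Squared distances are recovered by summing squared coordinate differences with a plus sign on real axes and a minus sign on imaginary axes, which is how the "native metric" of the judgments is preserved instead of being forced flat.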