Not an Illusion but a Manifestation: Understanding Large Language Model Reasoning Limitations Through Dual-Process Theory
Abstract
The characterization of Large Reasoning Models (LRMs) as exhibiting an “illusion of thinking” has recently emerged in the literature, sparking widespread public discourse. Some have suggested these manifestations represent bugs requiring fixes. I challenge this interpretation by reframing LRM behavior through dual-process theory from cognitive psychology, drawing on more than half a century of research on human cognitive effort and disengagement. The observed patterns, including performance collapse at high complexity and a counterintuitive reduction in reasoning effort, appear to align with human cognitive phenomena, particularly System 2 engagement and disengagement under cognitive load. Rather than representing technical limitations, these behaviors likely manifest computational processes analogous to human cognitive constraints: not a bug but a feature of bounded rational systems. I propose empirically testable hypotheses comparing LRM token patterns with human pupillometry data, and I suggest that computational “rest” periods may restore reasoning performance, paralleling human cognitive recovery mechanisms. This reframing indicates that LRM limitations may reflect bounded rationality rather than fundamental reasoning failures. Accordingly, this article is presented as a hypothesis paper: it collates six decades of cognitive effort research and invites the scientific community to subject the dual-process predictions to empirical tests through coordinated human–AI experiments.