Not an Illusion but a Manifestation: Understanding Large Language Model Reasoning Limitations Through Dual-Process Theory
Abstract
Recent work by Shojaee et al. (2025) characterizes Large Reasoning Models (LRMs) as exhibiting an "illusion of thinking," a finding that sparked widespread public discourse, with some commentators suggesting the observed behaviors are bugs requiring fixes. I challenge this interpretation by reframing LRM behavior through dual-process theory from cognitive psychology, drawing on more than half a century of research on human cognitive effort and disengagement. The observed patterns, including performance collapse at high complexity and a counterintuitive reduction in reasoning effort as problems grow harder, appear to align with human cognitive phenomena, particularly System 2 engagement and disengagement under cognitive load. Rather than representing technical limitations, these behaviors likely manifest computational processes analogous to human cognitive constraints: not a bug, but a feature of boundedly rational systems. I propose empirically testable hypotheses comparing LRM token-usage patterns with human pupillometry data, and I suggest that computational "rest" periods may restore reasoning performance, paralleling human cognitive recovery mechanisms. This reframing indicates that LRM limitations may reflect bounded rationality rather than fundamental reasoning failures.
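To make the proposed token-pupillometry comparison concrete, here is a minimal sketch of how such a test might be operationalized. The complexity levels, token counts, and pupil measurements below are invented placeholders, not data from Shojaee et al. (2025) or from any pupillometry study; the sketch assumes only that per-problem reasoning-token counts index LRM effort and that task-evoked pupil dilation indexes human effort. If the dual-process framing is right, both effort curves should rise with complexity and then collapse together past a disengagement threshold, which a rank correlation can test.

```python
# Illustrative sketch: one way to operationalize the proposed comparison
# between LRM reasoning effort and human pupillometry across task
# complexity. All numbers are invented placeholders, not real data.
import numpy as np
from scipy.stats import spearmanr

# Task complexity levels (e.g., number of disks in a Tower of Hanoi task).
complexity = np.arange(3, 11)

# Hypothetical mean reasoning tokens per problem: effort rises with
# complexity, then collapses past a threshold, the counterintuitive
# pattern reported by Shojaee et al. (2025).
lrm_tokens = np.array([1200, 2100, 3400, 5200, 6900, 7400, 3100, 1500])

# Hypothetical task-evoked pupil dilation (mm) in human solvers:
# dilation tracks cognitive effort, then drops as participants disengage.
pupil_dilation = np.array([0.21, 0.28, 0.35, 0.44, 0.52, 0.55, 0.30, 0.19])

# Show the two hypothetical effort curves side by side.
for c, t, d in zip(complexity, lrm_tokens, pupil_dilation):
    print(f"complexity={c:2d}  tokens={t:5d}  dilation={d:.2f}")

# Spearman's rho is rank-based, so it tests whether the two effort
# curves rise and fall together regardless of their units or scale.
rho, p_value = spearmanr(lrm_tokens, pupil_dilation)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```

A strongly positive rho on real data would be consistent with the parallel; an actual study would of course require matched tasks, per-item measurements, and controls for confounds such as context-window or response-length limits.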