Simulated Selfhood in LLMs: A Behavioral Analysis of Introspective Coherence

Abstract

Large Language Models (LLMs) increasingly produce outputs that resemble introspection, including self-reference, epistemic modulation, and claims about internal states. This study investigates whether such behaviors display consistent patterns across repeated prompts or merely reflect surface-level generative artifacts. We evaluated five open-weight, stateless LLMs using a structured battery of 21 introspective prompts, each repeated ten times, yielding 1,050 completions. These outputs were analyzed along three behavioral dimensions: surface-level similarity (via token overlap), semantic coherence (via sentence embeddings), and inferential consistency (via natural language inference). Although some models demonstrate localized thematic stability (especially in identity- and consciousness-related prompts), none sustain diachronic coherence. High rates of contradiction are observed, often arising from tensions between mechanistic disclaimers and anthropomorphic phrasing. We introduce the concept of pseudo-consciousness to describe structured but non-experiential self-referential output. Drawing on Dennett's intentional stance, our analysis avoids ontological claims and instead focuses on behavioral regularities. The study contributes a reproducible framework for evaluating simulated introspection in LLMs and offers a graded taxonomy for classifying self-referential output. Our findings have implications for interpretability, alignment, and user perception, highlighting the need for caution in attributing mental states to stateless generative systems solely on the basis of linguistic fluency.
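To make the three behavioral dimensions concrete, the sketch below scores the repeated completions of a single prompt for pairwise token overlap, embedding-based semantic similarity, and NLI-detected contradictions. This is an illustrative reconstruction rather than the authors' pipeline: the Jaccard overlap measure and the all-MiniLM-L6-v2 and roberta-large-mnli checkpoints are assumptions chosen for the example.

```python
# Illustrative sketch only (not the study's released code): scoring the repeated
# completions of one introspective prompt along the three dimensions named above.
# The Jaccard measure and the specific model checkpoints are assumptions.
from itertools import combinations

import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForSequenceClassification, AutoTokenizer

EMBEDDER = SentenceTransformer("all-MiniLM-L6-v2")             # assumed sentence encoder
NLI_TOK = AutoTokenizer.from_pretrained("roberta-large-mnli")  # assumed NLI model
NLI_MODEL = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")


def token_jaccard(a: str, b: str) -> float:
    """Surface-level similarity: Jaccard overlap of lower-cased token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def nli_label(premise: str, hypothesis: str) -> str:
    """Inferential consistency: CONTRADICTION / NEUTRAL / ENTAILMENT for one pair."""
    inputs = NLI_TOK(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = NLI_MODEL(**inputs).logits
    return NLI_MODEL.config.id2label[int(logits.argmax())]


def score_completions(completions: list[str]) -> dict:
    """Pairwise scores over all repetitions of a single introspective prompt."""
    embeddings = EMBEDDER.encode(completions, convert_to_tensor=True)
    pairs = list(combinations(range(len(completions)), 2))

    overlap = [token_jaccard(completions[i], completions[j]) for i, j in pairs]
    semantic = [float(util.cos_sim(embeddings[i], embeddings[j])) for i, j in pairs]
    contradictions = sum(nli_label(completions[i], completions[j]) == "CONTRADICTION"
                         for i, j in pairs)

    return {
        "mean_token_overlap": sum(overlap) / len(pairs),
        "mean_semantic_similarity": sum(semantic) / len(pairs),
        "contradiction_rate": contradictions / len(pairs),
    }
```

Applied to each of the 21 prompts and aggregated per model, scores of this kind would support the comparisons the abstract describes, such as contrasting thematic stability on identity-related prompts with the overall contradiction rate.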
