Counterfactual Divergence Singularity: A Theoretical Model of High-Similarity Instability and Micro-Counterfactual Drift in Large Language Models
Abstract
Large language models (LLMs) demonstrate remarkable fluency and contextual sensitivity, yet their behaviour under minimal semantic perturbations remains poorly understood. In particular, near-identical paraphrastic inputs, which would be expected to yield stable and equivalent responses, often produce disproportionately divergent generative behaviour. This paper introduces the Counterfactual Divergence Singularity (CDS), a theoretical and empirical framework that characterizes a previously underexplored instability regime in transformer-based language models. The framework formalizes micro-counterfactual perturbations as infinitesimal semantic variations in embedding space and shows that, as semantic similarity approaches unity, divergence metrics exhibit a hyperbolic blow-up. Using a controlled experimental pipeline based on FLAN-T5-generated paraphrases and Sentence-BERT semantic embeddings, the paper empirically demonstrates that divergence, curvature, and embedding drift collectively reveal a singular boundary where semantic proximity no longer guarantees behavioural stability. Despite minimal embedding displacement and near-maximal cosine similarity, reciprocal divergence measures increase sharply, exposing a structural nonlinearity in the input–output mapping of LLMs. These results suggest that LLM semantic manifolds are locally non-smooth and that standard embedding-based similarity metrics fail to capture instability near the identity boundary. The phenomenon offers a geometric and mathematical explanation for prompt sensitivity, counterfactual failure, and certain forms of hallucination observed in generative systems. By reframing these behaviours as consequences of latent-space singularities rather than isolated decoding artifacts, this work contributes a novel theoretical lens for evaluating robustness, interpretability, and reliability in large language models.
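To make the measurement pipeline outlined in the abstract concrete, the following is a minimal Python sketch, assuming a FLAN-T5 paraphraser and a Sentence-BERT embedder (the specific model names google/flan-t5-base and all-MiniLM-L6-v2 are illustrative choices) and a simple ratio-style divergence; the paper's formal divergence, curvature, and drift metrics are not defined in the abstract, so the quantities computed below are placeholders rather than the authors' measures.

```python
# Minimal sketch of a paraphrase-and-measure pipeline for micro-counterfactuals.
# Model names and the ratio-style divergence below are illustrative assumptions,
# not the paper's formal CDS metrics.
import numpy as np
from transformers import pipeline
from sentence_transformers import SentenceTransformer

flan = pipeline("text2text-generation", model="google/flan-t5-base")  # paraphraser / target LM (assumed)
embedder = SentenceTransformer("all-MiniLM-L6-v2")                    # Sentence-BERT embedder (assumed)

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

prompt = "Explain why the sky appears blue."
# Micro-counterfactual: a near-identical paraphrase of the original prompt.
paraphrase = flan(f"Paraphrase: {prompt}", max_new_tokens=32)[0]["generated_text"]

# Input-side semantic similarity (expected to approach 1 for micro-counterfactuals).
e_in = embedder.encode([prompt, paraphrase])
s_in = cosine(e_in[0], e_in[1])

# Generate responses to the original and perturbed prompts.
out_a = flan(prompt, max_new_tokens=64)[0]["generated_text"]
out_b = flan(paraphrase, max_new_tokens=64)[0]["generated_text"]

# Output-side embedding drift, normalized by the vanishing input perturbation;
# this ratio grows sharply as s_in -> 1, mirroring the hyperbolic blow-up.
e_out = embedder.encode([out_a, out_b])
drift = 1.0 - cosine(e_out[0], e_out[1])
divergence_ratio = drift / max(1.0 - s_in, 1e-8)

print(f"input similarity={s_in:.4f}  output drift={drift:.4f}  divergence ratio={divergence_ratio:.2f}")
```

Repeating this measurement over many paraphrase pairs and plotting the divergence ratio against input similarity is one way to visualize the instability regime the abstract describes as similarity approaches unity.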