Semantic Thermodynamics of Transformer Architectures: A Framework for Understanding Hallucination Constraints

Abstract

We develop \emph{Semantic Thermodynamics}, an information-theoretic framework for analyzing hallucinations in transformer systems under finite resources. The central object is mutual information between latent facts and model outputs, together with Fano-style lower bounds on semantic error. We clarify the stochastic assumptions required for non-degenerate information measures, distinguish true data-generating uncertainty from model-implied uncertainty, and replace unsupported hard capacity formulas with explicit capacity surrogates tied to precision, context budget, and effective representational rank. Under standard identification assumptions, we derive a baseline bound \begin{equation*} H_R \geq \max\left\{0,\,1-\frac{I(F;Y)+1}{\log M}\right\}, \end{equation*} where $H_R$ is hallucination rate, $F$ is the latent semantic fact, $Y$ is model output, and $M$ is semantic cardinality. We also provide a distribution-dependent variant and a bottleneck-aware extension for retrieval-augmented generation. This paper contributes a mathematically consistent formulation, a tighter assumptions section, and concrete empirical protocols for estimation and falsification.
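The baseline bound in the abstract is directly computable once $I(F;Y)$ and $M$ are estimated. A minimal sketch, assuming $I(F;Y)$ is measured in bits so that $\log M$ is taken base 2 (the function name and signature are illustrative, not from the paper):

```python
import math

def fano_hallucination_bound(mutual_info_bits: float, m: int) -> float:
    """Fano-style lower bound on hallucination rate H_R.

    mutual_info_bits: estimate of I(F; Y), the mutual information between
        the latent semantic fact F and the model output Y, in bits.
    m: semantic cardinality M (number of distinguishable facts), m >= 2.
    """
    if m < 2:
        raise ValueError("semantic cardinality M must be at least 2")
    # H_R >= max{ 0, 1 - (I(F;Y) + 1) / log2(M) }
    return max(0.0, 1.0 - (mutual_info_bits + 1.0) / math.log2(m))

# Example: with I(F;Y) = 2 bits over M = 1024 facts, the bound is
# 1 - 3/10 = 0.7, i.e. at least 70% hallucination rate is unavoidable.
print(fano_hallucination_bound(2.0, 1024))  # → 0.7
```

Note the bound is vacuous (zero) whenever $I(F;Y) + 1 \geq \log M$; it only constrains systems whose channel from fact to output carries substantially less information than the semantic space requires.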
