Epistemic Field Theory: Predicting and Governing Hallucination in Large Language Models via Multi-Model Consensus


Abstract

Large language models hallucinate at rates that undermine reliability in high-stakes applications. Mitigation strategies based on repeated sampling or majority voting implicitly assume error independence across samples. We introduce Epistemic Field Theory (EFT), which predicts hallucination probability from multi-model consensus. EFT defines a consensus field σ ∈ [0, 1] over query space and derives the predictor P(H) = (1 − σ)·η, where η is a model-specific noise coefficient. Across 13,728 human-validated responses (Cohen’s κ = 0.87) from four frontier models in three professional domains, σ predicts hallucination with AUC = 0.787, outperforming majority voting (0.518), SelfCheck methods (0.358–0.377), and self-reported confidence (0.461). Hallucination counts exhibit systematic overdispersion (ρ = 1.50), with empirical majority-failure rates 2.96× higher than independence predicts. Epistemic grounding reduces hallucination rates but not error correlation, revealing frequency and structure as independent dimensions of the hallucination problem.
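The predictor described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes σ is estimated as the fraction of models agreeing with the modal answer for a query, and that η is supplied as a model-specific constant; the function names are hypothetical.

```python
from collections import Counter

def consensus_sigma(answers):
    """Estimate the consensus field sigma in [0, 1] as the fraction of
    models whose answers match the modal (majority) answer.
    (Assumed proxy; the paper's construction of sigma may differ.)"""
    counts = Counter(answers)
    modal_count = counts.most_common(1)[0][1]
    return modal_count / len(answers)

def hallucination_probability(answers, eta):
    """EFT predictor P(H) = (1 - sigma) * eta, where eta is a
    model-specific noise coefficient (assumed given)."""
    sigma = consensus_sigma(answers)
    return (1.0 - sigma) * eta

# Example: 3 of 4 models agree, so sigma = 0.75; with eta = 0.4,
# the predicted hallucination probability is (1 - 0.75) * 0.4 = 0.1.
p = hallucination_probability(["A", "A", "B", "A"], eta=0.4)
```

Under this reading, full consensus (σ = 1) drives P(H) to zero regardless of η, while disagreement scales the model's baseline noise rate.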
