Epistemic Closure and Falsifiability in AI-Mediated Self-Referential Systems


Abstract

The proliferation of complex conceptual systems developed in interaction with artificial intelligence agents poses an epistemological problem not anticipated by classical theories of falsification: in such systems, the external validating agent is simultaneously a structural generator of narrative coherence, inducing a functional collapse between the roles of creation and assessment. This collapse is not reducible to Popperian immunization or to the adjustment of auxiliary hypotheses in the Lakatosian sense, since it arises not from deliberate defensive strategies but from an architectural asymmetry between how such systems produce coherence and how their human creators interpret it. This paper proposes the concept of epistemic delusion to designate the methodological state in which the operational conditions for falsification disappear through the cumulative effect of conceptual-drift mechanisms, and argues that in AI-mediated self-referential systems this process exhibits a specific vector — systemic narrative induction — not yet systematized in the literature. The paper examines the mechanisms of conceptual drift, the modes of epistemic closure, and a set of methodological safeguards whose normative foundation is derived from the distinction between internally generated coherence and empirically independent corroboration.
