Simulation Without Ground: Epistemic Collapse and Reflexive Ethics in Generative AI

Abstract

This paper adopts a Research Through Design (RTD) approach to investigate the epistemic consequences of large language models (LLMs). We introduce the concept of Simulacral Drift to describe how recursive training on synthetic outputs displaces empirical reference, degrading both model fidelity and user cognition. Rather than collecting empirical data, this work engages in conceptual design research to surface and structure the ethical implications of generative simulation. Through a reflexive inquiry grounded in media theory, STS, and design philosophy, we propose a normative framework of Reflexive Epistemic Ethics, comprising referential integrity, simulation auditability, and epistemic friction. We further integrate recent neurophysiological evidence from Kosmyna et al. (2025), which shows that LLM users exhibit reduced memory recall and neural engagement, supporting our claim that epistemic collapse is embodied and co-produced. By treating simulation as a socio-technical condition and knowledge frameworks as designed epistemic artifacts, this paper contributes to the growing body of RTD scholarship that seeks to anticipate, intervene in, and reshape the cognitive infrastructures of AI.
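The mechanism behind Simulacral Drift parallels what the machine-learning literature calls model collapse: when each generation of a model is trained only on samples from the previous generation, estimation error compounds and the learned distribution drifts away from the original data. The sketch below is our own toy illustration, not the paper's method; it stands in an LLM with a one-dimensional Gaussian so the drift is directly visible as the fitted mean wandering and the variance shrinking across generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "empirical" data drawn from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for gen in range(20):
    # Fit a simple parametric model (a Gaussian) to the current data.
    mu, sigma = data.mean(), data.std()
    print(f"gen {gen:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    # The next generation trains only on synthetic samples from the
    # fitted model, never returning to the original empirical data,
    # so each generation's estimation error is baked into the next.
    data = rng.normal(loc=mu, scale=sigma, size=100)
```

Run over many generations, the fitted sigma tends toward zero and mu random-walks away from the truth, a minimal analogue of the loss of empirical reference the abstract describes.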
