Artificial Generative Intelligence (AGI) as the End of Theory: From Explanation to Simulation
Abstract
The advent of Artificial General Intelligence (AGI) poses a fundamental challenge to the nature of scientific inquiry: what happens to theory when machines cease to pursue understanding and focus solely on prediction? This article argues that AGI marks an epistemological rupture in the long continuum of scientific rationality, in which explanation, the quest for causal and conceptual comprehension, gives way to simulation, a mode of cognition driven by predictive patterning rather than interpretative depth. The paper brings figures from the philosophy of science, including Popper, Kuhn, Hempel, and Lakatos, into conversation with the philosophical hermeneutics of Gadamer, Ricoeur, and Heidegger to investigate how AGI redefines the concept of “knowledge.” Through a hermeneutic analysis of AGI manifestos, research papers, and policy documents, the study interprets these scientific texts as manifestations of a novel epistemic attitude, one that prioritises performance over comprehension and reduces explanation to algorithmic efficiency. AGI's simulations achieve unparalleled predictive accuracy, but they jeopardise the foundational objectives of science: justification, clarity, and accountability. The article concludes by proposing a hermeneutic philosophy of AGI that reinstates the significance of interpretation and meaning within automated knowledge systems. Rather than marking the end of theory, AGI appears as a critical mirror into which science must look in order to rethink what it means to “know.” In an age when data may move faster than understanding, philosophy's task is to keep the space of interpretation open, reminding us that knowledge is not just about what works, but also about what makes sense.