The Code of Society: Constructing Social Theory through Large Language Models
Abstract
In recent years, large language models (LLMs) such as GPT-4 have transcended their status as computational tools to emerge as collaborators in intellectual tasks such as theory construction, critique, and simulation. This paper investigates the evolving epistemic role of LLMs in social theory, arguing that they represent a paradigmatic shift: from modeling society to modeling thought about society itself. Drawing on textual simulations in which LLMs replicate the argumentative styles of classical theorists such as Marx, Durkheim, and Weber, we explore how these technologies can not only preserve theoretical syntax but also generate novel syntheses and post-disciplinary dialogues.

Introducing the concept of the meta-theorist machine—an artificial intelligence that engages reflexively with the production of theory—we critically examine the capacity of LLMs to engage with core dimensions of social theory: normativity, reflexivity, and historicity. Can an LLM meaningfully simulate dialectical reasoning? Can it reflect on ideology, or does it merely reproduce inherited patterns of knowledge? Using a mixed-methods approach that combines prompt-based simulations, critical AI analysis, and textual experimentation, the paper explores whether LLMs can be considered agents of theoretical production rather than passive assistants.

Nevertheless, this emergent potential raises urgent epistemological and ethical questions. LLMs are trained on corpora that reproduce colonial, gendered, and ideological biases, and so risk reinforcing dominant paradigms under the guise of novelty. We interrogate whether AI-generated theory can be steered toward decolonial, feminist, and post-humanist innovations, and whether LLMs can participate meaningfully in the pluralization of theoretical discourse.

Finally, the paper examines the ontological boundary between tool and thinker: if LLMs engage in theory-building, how must concepts of authorship, intellectual labor, and epistemic agency be redefined? By positioning LLMs as emergent epistemic actors, we invite social theorists, AI developers, and epistemologists to rethink together the boundaries of theoretical practice in the twenty-first century. Rather than treating AI as a mere computational convenience, this paper argues for its recognition as a participant in the co-creation of future social theory.
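The abstract describes the prompt-based simulations only at this level of abstraction. As a purely illustrative sketch of what such a simulation might look like in practice, the Python fragment below asks a chat model to argue a contemporary question in the register of a classical theorist. The model name, prompt wording, and the simulate_theorist helper are assumptions introduced here for illustration, not the authors' protocol; the call uses the OpenAI chat completions API.

```python
# Illustrative sketch of a prompt-based theorist simulation (not the paper's
# actual protocol). Assumes the OpenAI Python SDK (>= 1.0) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def simulate_theorist(theorist: str, question: str, model: str = "gpt-4") -> str:
    """Ask the model to answer a question in the argumentative style of a classical theorist."""
    system_prompt = (
        f"You are writing in the argumentative style of {theorist}. "
        "Reason from that theorist's core concepts and characteristic "
        "analytical moves, and make the conceptual vocabulary explicit."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        temperature=0.7,  # allow some variation across simulated runs
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    question = "How should we interpret the gig economy?"
    for theorist in ("Karl Marx", "Émile Durkheim", "Max Weber"):
        print(f"--- {theorist} ---")
        print(simulate_theorist(theorist, question))
```

Outputs from runs of this kind could then be read against canonical texts or against one another, which is one way the textual experimentation described above might be operationalized.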