Computational Scapegoats: From Mimetic to Alienated Desire in the Production of Large Language Models

Abstract

The prevailing paradigm for training models to perform intelligence is mimetic: the copying of human patterns to produce unsurprising, low-perplexity samples of language and other media. At its limits, the imitative paradigm appears destined to produce theoretical as well as practical contradictions. For example, the very desire to build an AI appears connected to human traits that seem difficult to replicate, as it stems from an organisation of subjectivity that includes a sense of lack or deficiency which, if reproduced, would defeat certain stated purposes of AI such as the realisation of general or super intelligence. To explore such dilemmas, this article begins with Girard’s idea of mimetic desire, itself philosophically influential within parts of the AI community, and how this relates to the production of an artificial subjectivity. It then examines how AI is framed fetishistically – as an object upon which human desire is projected and invested. Finally, it works through theorisations of alienation, and posits an interpretation of AI as both alien and alienated as a useful conceptual alternative to the pure pursuit of human-like computational agents. The article concludes with speculation about the possibility of symbiotic pedagogy: the side-by-side juxtaposition of human and machine learning, without expectation of mimetic convergence between the two.
