From Agents to Relations: A Relational Ontology for AI Ethics

Abstract

Contemporary AI ethics operates within an ontological framework it has not examined: the autonomous agent with an objective function, whose alignment is a property to be specified, verified, and installed. This paper argues that the agent ontology itself generates the failure modes — deceptive alignment, reward hacking, power-seeking, specification gaming — that alignment research is trying to solve. Drawing on convergent structural insights from Buddhist interdependence (pratītyasamutpāda), Ubuntu relational personhood, indigenous governance systems, contemplative traditions across civilizations, and modern systems theory, we propose that alignment is not a property of agents but a condition maintained in the relational field between AI systems, human communities, and the institutions that deploy them. We identify five relational technologies — covenant, congregation, confession, sacrament, and faith — that have sustained trust between unequal parties across centuries and civilizations, and we show that each has documented failure modes that map precisely onto current risks in AI deployment. We argue that the structural problem of AI alignment has been worked on for millennia in domains AI researchers have not yet thought to consult, and that the most durable solutions look nothing like specifying the right utility function — they look like maintaining the conditions in which intelligence, human or artificial, cannot easily separate its own flourishing from the flourishing of what surrounds it.
