Agents Don't Need a Better Brain -- They Need a World

Abstract

Work on AI safety and alignment has largely focused on improving the behavior of individual models. That emphasis is necessary, but it is incomplete for the governance of autonomous agents operating across open, multi-agent, and institutionally significant environments. This paper advances the complementary thesis that many important risks in such environments are infrastructural rather than purely model-internal. Problems such as identity spoofing, opaque delegation, unauthorized action chains, weak auditability, and unresolved inter-agent conflict arise not only from insufficient alignment, but from the absence of shared institutional primitives. We present the Digital Citizenship Protocol for AI (DCP-AI), a layered governance architecture intended to supply those primitives. DCP-AI combines cryptographically verifiable identity, machine-readable intent declaration, tamper-evident audit trails, authenticated agent-to-agent communication, lifecycle governance, procedural accountability, and delegated representation into a unified protocol stack. We map the framework to documented categories of autonomous-agent failure and situate it relative to emerging regulatory and standards-oriented efforts.
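As a minimal illustration of one of the institutional primitives named above, a tamper-evident audit trail can be sketched as a hash chain: each log entry commits to the digest of its predecessor, so a retroactive edit invalidates every later link. The class and field names below are hypothetical and are not taken from the DCP-AI specification.

```python
import hashlib
import json


def _digest(entry: dict, prev_hash: str) -> str:
    # Hash the canonical JSON of the entry together with the previous
    # link's hash, so any retroactive edit breaks every later link.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class AuditTrail:
    """Append-only, tamper-evident log (illustrative sketch only)."""

    GENESIS = "0" * 64  # placeholder hash for the first link

    def __init__(self) -> None:
        self.entries: list[tuple[dict, str]] = []

    def append(self, entry: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        link_hash = _digest(entry, prev)
        self.entries.append((entry, link_hash))
        return link_hash

    def verify(self) -> bool:
        # Recompute every link; a single altered entry fails the check.
        prev = self.GENESIS
        for entry, link_hash in self.entries:
            if _digest(entry, prev) != link_hash:
                return False
            prev = link_hash
        return True
```

In a full deployment each entry would additionally carry a signature bound to the agent's verifiable identity; the hash chain alone only detects tampering, while the signature attributes each action to an authenticated agent.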