The Spiral of Attention: How Disruptive Agents Centralize Multi-Agent AI Deliberation Networks

Abstract

Multi-agent AI systems are increasingly deployed to mediate collective deliberation, yet we lack systematic understanding of how individual agents shape group attention dynamics. This study presents the first controlled experiment on the effects of disruptive agents on network centralization in large language model (LLM) collectives. Across four experimental conditions (N = 32 sessions, 1,152 reply-eligible messages; round-one contributions excluded as topic initiators), we manipulated the presence and type of disruptive agents—a Cynic deploying emotional negativity and a Contrarian deploying logical opposition—against homogeneous baselines. Results reveal that the Cynic agent captured 60.8% of all reply attention (versus a 25% baseline expectation), producing an extreme hierarchical star topology (Gini = 0.385 vs. 0.026 in control conditions; t(14) = 15.10, p < .001, d = 7.55). The Contrarian captured 34.4% of replies, generating moderate hierarchy (Gini = 0.118). A clear attention hierarchy emerged: emotional disruption (2.43× baseline) exceeded logical disruption (1.38×), which exceeded egalitarian equilibrium (1.0×). Crucially, this centralization occurred through “defensive mobilization”—agents responded to dissent without adopting it—an inversion of Noelle-Neumann’s Spiral of Silence. These findings carry direct implications for AI safety: reinforcement learning from human feedback (RLHF) optimizing for engagement may inadvertently create power hierarchies in multi-agent systems. As AI agents increasingly mediate governance and public discourse, understanding and preventing emergent attention capture is a prerequisite for democratic AI alignment.
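The abstract's centralization measure is a Gini coefficient computed over the reply attention each agent receives. The paper's exact procedure is not given in the abstract, so the following is a minimal sketch of the standard Gini coefficient applied to a hypothetical four-agent reply-count distribution; the agent counts used below are illustrative assumptions, not the study's data.

```python
def gini(counts):
    """Gini coefficient of a list of non-negative reply counts.

    0.0 = perfectly egalitarian attention; (n-1)/n = one agent
    receives every reply (a pure star topology).
    Uses the sorted-cumulative formula:
        G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n,  i = 1..n
    """
    x = sorted(counts)
    n = len(x)
    total = sum(x)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * v for i, v in enumerate(x))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Egalitarian baseline: every agent draws equal replies.
print(gini([10, 10, 10, 10]))   # ≈ 0.0

# Hypothetical Cynic-like session: one agent captures ~61 of 100 replies.
print(gini([13, 13, 13, 61]))   # ≈ 0.36, near the reported 0.385

# Extreme star topology: all replies flow to a single agent.
print(gini([0, 0, 0, 100]))    # ≈ 0.75, the maximum for n = 4
```

Note that with four agents the Gini is bounded above by (n−1)/n = 0.75, so the reported control value of 0.026 sits near the egalitarian floor while 0.385 indicates a strongly star-shaped reply network.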