Implementing Affective Sovereignty: Formal Foundations, Design Architecture, and Computational Governance
Abstract
Emotional artificial intelligence (AI), that is, systems that infer, simulate, or influence human feelings, creates ethical risks that existing frameworks of privacy, transparency, and oversight cannot fully address. This paper advances the concept of Affective Sovereignty: the right of individuals to remain the ultimate interpreters of their own emotions. We make four contributions. First, we develop formal foundations by decomposing risk functions to capture interpretive override as a measurable cost. Second, we propose a Sovereign-by-Design architecture that embeds safeguards and contestability into the machine learning lifecycle. Third, we operationalize sovereignty with three new metrics: the Interpretive Override Score (IOS), the After-correction Misalignment Rate (AMR), and Affective Divergence (AD). We demonstrate their use in a proof-of-concept simulation. Fourth, we link technical design to governance by introducing the Affective Sovereignty Contract (ASC), a machine-readable policy layer, and by issuing a Declaration of Affective Sovereignty as a normative anchor for regulation. Together, these elements offer a computational framework for aligning emotional AI with human dignity and autonomy, moving beyond abstract principles toward enforceable, testable standards.
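For concreteness, the minimal Python sketch below illustrates how metrics of this kind could be computed from logged user-system interactions in a simulation. The abstract does not give the formal definitions of IOS, AMR, or AD, so every formula and field here (the override and correction flags, the total-variation distance over emotion labels) is an illustrative assumption rather than the paper's specification.

```python
# Illustrative sketch only: the formal definitions of IOS, AMR, and AD are not
# reproduced in this abstract, so the formulas below are assumptions chosen to
# show how such metrics could be computed over an interaction log.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Interaction:
    user_label: str                  # emotion the user reports for themselves
    system_label: str                # emotion the system asserts
    overridden: bool                 # system acted on its own label despite the user's report (assumed signal)
    corrected: bool                  # user explicitly corrected the system
    aligned_after_correction: bool   # system adopted the correction (assumed signal)


def interpretive_override_score(log: Sequence[Interaction]) -> float:
    """Assumed IOS: share of interactions in which the system overrode the user's self-report."""
    if not log:
        return 0.0
    return sum(i.overridden for i in log) / len(log)


def after_correction_misalignment_rate(log: Sequence[Interaction]) -> float:
    """Assumed AMR: share of corrected interactions that remain misaligned afterwards."""
    corrected = [i for i in log if i.corrected]
    if not corrected:
        return 0.0
    return sum(not i.aligned_after_correction for i in corrected) / len(corrected)


def affective_divergence(user_dist: dict, system_dist: dict) -> float:
    """Assumed AD: total-variation distance between user-reported and
    system-inferred emotion distributions over a shared label set."""
    labels = set(user_dist) | set(system_dist)
    return 0.5 * sum(abs(user_dist.get(l, 0.0) - system_dist.get(l, 0.0)) for l in labels)


if __name__ == "__main__":
    log = [
        Interaction("calm", "angry", overridden=True, corrected=True, aligned_after_correction=False),
        Interaction("sad", "sad", overridden=False, corrected=False, aligned_after_correction=True),
        Interaction("happy", "anxious", overridden=True, corrected=True, aligned_after_correction=True),
    ]
    print("IOS:", interpretive_override_score(log))
    print("AMR:", after_correction_misalignment_rate(log))
    print("AD :", affective_divergence({"calm": 0.5, "sad": 0.3, "happy": 0.2},
                                       {"angry": 0.4, "sad": 0.3, "anxious": 0.3}))
```

In this toy log, IOS and AMR fall as the system defers to user corrections, which is the direction of improvement the metrics are meant to make auditable; the actual scoring rules would follow the paper's formal definitions.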