Emotion is Mine: Ethical Design Principles for Affective Sovereignty in Predictive AI
Abstract
As artificial intelligence systems increasingly predict, model, and respond to human emotions, a pressing ethical question emerges: who holds the right to define what one feels—the individual or the machine? This paper introduces and operationalizes the concept of affective sovereignty, which defends the individual’s interpretive authority over their emotional states in algorithmically mediated environments. Building on the complementary notion of uniqueness violation, we argue that predictive emotion AI poses not only privacy risks but also fundamental threats to identity, agency, and epistemic justice.

Through two case studies—the 2024 EU AI Act, which bans emotion recognition in high-risk domains, and Replika, a commercially deployed affective chatbot—we analyze how regulatory and experiential tensions converge around affective autonomy. Our findings reveal that emotion AI systems can undermine users’ self-perception and enforce standardized affective templates, thereby distorting personal meaning and emotional authenticity.

In response, we propose an ethical design framework grounded in interpretive transparency, design restraint, and identity-responsive feedback. These principles collectively aim to realign affective AI with human values by ensuring contextual sensitivity, user agency, and dignity preservation.

This work contributes both a conceptual foundation and a policy-relevant framework for developers, regulators, and ethicists engaged in the design and governance of emotion-aware technologies. In doing so, it underscores a broader normative claim: the right to feel—and to interpret that feeling—must remain inseparable from the one who feels it.