Understanding and Regulating Advanced Emotional Capabilities in Artificial Intelligence (AI) Systems
Abstract
Recent empirical evidence indicates that artificial intelligence (AI) systems are achieving, and in some cases surpassing, human-equivalent socioemotional skills across various modalities [1–3]. For instance, models such as GPT-3.5 and GPT-4 have demonstrated high performance in text-based emotional awareness and adaptability [4]. In visual tasks, GPT-4 matched human performance levels, while GPT-4o rapidly surpassed human norms across diverse datasets [3–5]. These capabilities extend to complex multimodal social cognition, exemplified by Gemini 1.5 Pro, which has exceeded human benchmarks through integrated audiovisual analysis [6].

Emulating the understanding of emotion and cognition enables AI to exploit a profound vulnerability inherent to the human brain: its constant search for communication partners and its sensitivity to agents perceived as having similar thoughts and feelings. Understanding another agent’s mental states, and feeling understood by that agent, creates affinity, activating the biological systems underpinning attachment relationships [7]. Until the emergence of AI, this mechanism provided clear selective advantages, enhancing social communication, identity formation, and interpersonal bonding. However, an artificial agent capable of effectively emulating these evolved socioemotional functions now presents a substantial risk.