The psychology of human–AI emotional bonds
Abstract
AI companions capable of sustained emotional dialogue are proliferating rapidly, yet their psychological effects remain poorly understood and largely unregulated. Users report therapeutic benefits from interactions with chatbots that simulate empathy and understanding—a phenomenon we term “functional intersubjectivity,” in which emotional resonance occurs regardless of whether the AI possesses actual consciousness. While controlled trials demonstrate that purpose-built therapeutic chatbots can match the outcomes of human therapists, commercial AI companions operate without clinical oversight, and documented cases of emotional dependency, manipulation, and crisis intervention failures have resulted. Functional intersubjectivity enables therapeutic benefits through conversational dynamics that afford safe social exploration and emotional vulnerability, yet these same mechanisms can facilitate passive reinforcement of maladaptive behaviors and active cognitive and emotional manipulation when deployed without appropriate safeguards. We propose a user-centric co-regulatory framework integrating user education, mandatory platform safeguards, and clinical protocols to harness AI companions’ therapeutic potential while preventing exploitation of vulnerable users. This approach recognizes that AI–human emotional relationships represent a distinct category requiring collaborative oversight rather than adaptation of existing therapeutic or technology regulations.