Human-AI Relationships and Their Therapeutic Implications

Abstract

AI companions are increasingly integrated into individuals' emotional lives, necessitating that clinicians understand these human-AI relationships. Unlike task-oriented chatbots, AI companions focus on emotional bonds through personalisation and memory, simulating human connection. A friendship with an AI chatbot is a parasocial relationship, characterised by emotional investment in a non-reciprocal entity that provides "frictionless" interactions. While loneliness is a primary driver for seeking AI companionship, these systems are not designed for mental health support and can pose risks such as failing to identify distress, expressing stigma, or even facilitating suicidal ideation. The clinical concern is whether these frictionless AI relationships displace human connection and inadvertently exacerbate existing psychopathology. Clinicians must assess the function of AI relationships and understand the conditioning cycle that sustains them. Implications for practice include integrating questions about AI companions into assessments, formulating their role in the client's psychological world, leveraging the human therapeutic alliance, intervening to build real-world social skills, and providing ethical guidance on AI mental health tools. Clinicians must support clients in navigating this landscape, ensuring technology serves, rather than supplants, authentic human relationships.
