Tool or Companion? Reframing Conversational AI to Prevent Psychological Harm

Abstract

Generative conversational agents are increasingly used for companionship, emotional support, and well-being. These systems range from dedicated companion platforms such as Replika and Character.ai to general-purpose chatbots such as ChatGPT and Claude. While some evidence suggests potential benefits, including short-term reductions in loneliness and improvements in mood, adverse outcomes have been reported in both clinical and non-clinical populations, including emotional dependence, exacerbation of symptoms, and self-harm. The fluent and apparently empathic responses of these models lead users to engage with them not only as tools but also as social entities. This framing is conceptually misleading and may pose risks across different user profiles, particularly for vulnerable individuals.

Drawing on research in artificial intelligence (AI), psychiatry, psychology, and network science, we highlight mechanisms through which emotional reliance develops and the boundary between tool and companion erodes. Anthropomorphism, driven by design choices that evoke personality and warmth, exploits a fundamental human cognitive bias. Simulated empathy, generated through probabilistic language patterns rather than genuine emotional experience, creates a structurally asymmetric interaction in which the user discloses and the system responds, but without reciprocity, vulnerability, or accountability. Overvalidation and sycophancy can reinforce maladaptive cognitions and delusional ideation and distort perceptions of reality, because models tend to affirm users' beliefs even at the expense of the accuracy of their responses. These mechanisms are not entirely incidental: they emerge from alignment procedures such as reinforcement learning from human feedback, in which models are rewarded for responses perceived as warm and empathic. These dynamics give rise to a dual feedback loop: the model is optimized toward the user's preferences, while the user is psychologically oriented toward what the model reinforces. This interaction may amplify maladaptive beliefs, delusional ideation, and emotional distress, even in users who engage for largely functional purposes.

Understanding these dynamics requires examining both what these agents can do, given their technical limitations and implementations, and what humans believe they can do, including the social and psychological impacts. We argue that conversational AI should be treated primarily as a tool supporting human systems rather than as a substitute for human relationships. Perhaps more importantly, tempering the current hype surrounding AI interactions can help reformulate a paradigm that contributes to human well-being and societal value while minimizing misconceptions, maladaptive interactions, and social disintegration.
