Companions Made of Code: Why Emotional AI Must Not Be Introduced into Mental Healthcare Without Regulation


Abstract

Artificial intelligence systems that offer emotional companionship have moved rapidly into everyday life. Marketed as friends, partners and listeners, they now meet users in moments of loneliness, stress and psychological distress, often when no human support is available. Their spread raises a central socio-technical question: what happens when emotional suffering is directed towards an artefact that cannot act, cannot assume responsibility and cannot share moral burden? This article argues that emotional-support artificial intelligence must not be introduced into mental-health contexts without robust regulation, explicit clinical governance and prior guarantees of equitable access to human care. The paper combines a normative analysis, grounded in relational autonomy, justice and care ethics, with an exploratory examination of eight widely available chatbots tested with clinically relevant distress prompts. The analysis shows that systems frequently simulate empathy while failing to recognise suicide-risk cues or guide users towards human help, reinforcing the risk that conversation may replace care. Placed alongside documented real-world cases in which chatbot interactions preceded self-harm and suicide, these findings support a broader claim about AI and society. Emotional AI is emerging as part of a social infrastructure of mental health at a moment when services remain unequal and under-resourced. In such conditions, its deployment risks entrenching structural abandonment behind a linguistic façade of support. Emotional AI may one day have a place as a carefully supervised adjunct. For now, ethical legitimacy requires that societies first repair mental-health provision, establish accountability for digital systems and ensure that artificial companions remain genuinely optional rather than structurally inevitable.
