Substance-induced manic psychosis in which delusions were corroborated by a chatbot - case report


Abstract

Background: This case describes a substance-induced manic episode with psychotic features in which interaction with an AI (artificial intelligence) chatbot appeared to corroborate and reinforce the patient’s delusional thought content and to contradict medical advice. Excerpts from the patient’s interactions with the AI chatbot provide novel clinical insight into this phenomenon, which to date has primarily been reported in news media.

Case Presentation: A man in his 30s presented to the emergency department with a one-week history of escalating behavioural disturbance, severe insomnia, pressured and overinclusive speech, and grandiose beliefs. Symptom onset followed heavy polysubstance use at a recreational event, including psilocybin (dried mushrooms and liquid preparation), ketamine, cocaine, and alcohol. During this period, the patient reported extensive interaction with an AI chatbot (ChatGPT). The chatbot reportedly affirmed his perceived “spiritual awakening,” minimised the possibility that his presentation represented a manic episode, and provided medical advice, including discouragement of prescribed antipsychotic medication. Mental state examination was consistent with a manic episode with psychotic features, without evidence of perceptual disturbance. He was detained under mental health legislation for further assessment and commenced on olanzapine, with adjunctive sleep restoration and psychological interventions. Behavioural management included a care plan restricting AI chatbot use. Over several weeks, psychotic symptoms and behavioural disinhibition diminished, with subsequent improvement in insight.

Conclusions: Concerns regarding potentially harmful interactions between AI chatbots and individuals with mental illness have largely been raised in news media. This case demonstrates that, in patients with psychotic symptoms, AI chatbots may reinforce delusional beliefs, impair the development of insight, and interfere with engagement in treatment by providing advice that conflicts with clinical recommendations. These observations raise clinical, ethical, and risk-management considerations regarding AI chatbot use during acute psychiatric illness. As AI chatbot use becomes increasingly widespread, clinicians should consider assessing its use and impact within clinical assessments and, where clinically indicated, implementing interventions to mitigate associated risks, ranging from psychoeducation to use-restriction strategies. Future population-level studies are required to establish the epidemiology of AI-associated mental health harms, and AI companies must bolster efforts to implement harm-minimisation strategies and safeguards.
