Evaluating Chatbots in Psychiatry: Rasch-Based Insights into Clinical Knowledge and Reasoning

Abstract

Chatbots are increasingly recognized as valuable tools for clinical support in psychiatry. This study systematically evaluates their strengths and limitations in psychiatric clinical knowledge and reasoning. A total of 27 chatbots, including ChatGPT-o1-preview, were assessed using 160 multiple-choice questions derived from the 2023 and 2024 Taiwan Psychiatry Licensing Examinations. The Rasch model was employed to analyze chatbot performance, supplemented by dimensionality analysis and qualitative assessment of reasoning processes. Among the models, ChatGPT-o1-preview achieved the highest performance, with a JMLE-estimated ability score of 2.23, significantly exceeding the passing threshold (p < 0.001). It excelled in diagnostic and treatment reasoning and demonstrated a strong grasp of psychopharmacology concepts. However, limitations were identified in its factual recall, its handling of niche topics, and occasional reasoning biases. Building on these findings, we highlight key aspects of a potential clinical workflow to guide the practical integration of chatbots into psychiatric practice. While ChatGPT-o1-preview holds significant potential as a clinical decision-support tool, its limitations underscore the necessity of human oversight. Continuous evaluation and domain-specific training are crucial to maximize its utility and ensure safe clinical implementation.
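To make the Rasch-based ability scores reported above concrete, the following is a minimal, hypothetical Python sketch of the dichotomous Rasch model and a maximum-likelihood ability estimate for one respondent given known item difficulties. It is illustrative only: the item difficulties and response pattern are invented, and the paper's actual analysis (27 chatbots, 160 items, JMLE) would estimate person and item parameters jointly rather than taking difficulties as given.

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of a correct response
    given person ability theta and item difficulty b (both in logits)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_ability(responses, difficulties, n_iter=50):
    """Newton-Raphson MLE of a single respondent's ability from scored
    responses (1 = correct, 0 = incorrect) and fixed item difficulties."""
    theta = 0.0
    for _ in range(n_iter):
        p = rasch_prob(theta, difficulties)
        gradient = np.sum(responses - p)        # d(log-likelihood)/d(theta)
        information = np.sum(p * (1.0 - p))     # Fisher information at theta
        step = gradient / information
        theta += step
        if abs(step) < 1e-6:
            break
    return theta

# Hypothetical example: 10 items of varying difficulty, one chatbot's scored answers.
difficulties = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
responses    = np.array([1, 1, 1, 1, 1, 1, 1, 0, 1, 0])
print(f"Estimated ability: {estimate_ability(responses, difficulties):.2f} logits")
```

Under this model, a chatbot's ability and an examination's passing threshold sit on the same logit scale, which is what allows the comparison against the passing standard described in the abstract.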
