Assessing ChatGPT-4 as a Clinical Decision Support Tool in Neuro-Oncology Radiotherapy: A Prospective Comparative Study

Abstract

Background and purpose: Large language models (LLMs) such as ChatGPT-4 have shown potential for medical decision support, but their reliability in specialized fields remains uncertain. This study evaluated ChatGPT-4's performance as a clinical decision support tool in neuro-oncology radiotherapy by comparing its treatment recommendations for patients with central nervous system tumors against a multidisciplinary tumor board's decisions, an independent specialist's opinion, and published guidelines.

Materials and methods: We prospectively collected 101 neuro-oncology cases (May 2024–May 2025) presented at a tertiary-care tumor board. Key case details were entered into ChatGPT-4 with a standardized query asking whether to recommend radiotherapy and, if so, the target volumes and dose. The AI's recommendations were recorded and compared with the tumor board's consensus, a blinded radiation oncologist's recommendation, and ESMO guideline indications where applicable. Concordance rates (percentage agreement) and Cohen's kappa were calculated, sensitivity and specificity were assessed using the reference decisions as ground truth, and McNemar's test was used to evaluate any systematic bias in discordant recommendations.

Results: ChatGPT-4 matched the tumor board's radiotherapy recommendations in 76% of cases (κ = 0.61); agreement with the independent specialist was 79% (κ = 0.58). In 61 low-complexity cases with clear guidelines, ChatGPT-4 concurred with guideline-based indications in 76.7% of cases but missed some recommended treatments (sensitivity ~73%, specificity 100%). In intermediate-complexity cases, concordance with the tumor board was ~70.8%, with most discrepancies arising because the AI recommended treatment that the experts did not (sensitivity ~85.7%, specificity 64.7%). In high-complexity cases, agreement was 90.9% (sensitivity 100%, specificity 83.3%). Overall, ChatGPT-4 showed an overtreatment bias, more often recommending radiotherapy when the human experts chose observation (p < 0.05 for AI vs. tumor board discordances), and its overall agreement (76%) was lower than that of the human specialist (90%).

Conclusion: ChatGPT-4 can reproduce many expert radiotherapy decisions in neuro-oncology, reflecting substantial absorption of standard clinical practice. However, it cannot substitute for human judgment: the AI omitted some indicated treatments in straightforward cases and suggested unnecessary therapy in some borderline cases, indicating a lack of nuanced clinical reasoning. Careful human oversight is essential if such models are to be used for clinical decision support.
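The statistics named in the methods (percentage agreement, Cohen's kappa, sensitivity/specificity against a reference decision, and McNemar's test on discordant pairs) can be illustrated with a minimal sketch. The code below is not the authors' analysis; it assumes each case is reduced to a binary recommendation (radiotherapy vs. observation) and the example data are hypothetical.

```python
# Illustrative sketch: agreement statistics for paired binary recommendations
# from an AI system and a reference (tumor board, specialist, or guideline).
from math import comb

def agreement_stats(ai, ref):
    """Percent agreement, Cohen's kappa, sensitivity/specificity (reference as
    ground truth), and an exact McNemar test on the discordant pairs."""
    n = len(ai)
    tp = sum(a and r for a, r in zip(ai, ref))              # both recommend RT
    tn = sum((not a) and (not r) for a, r in zip(ai, ref))  # both recommend observation
    fp = sum(a and (not r) for a, r in zip(ai, ref))        # AI recommends RT, reference does not
    fn = sum((not a) and r for a, r in zip(ai, ref))        # reference recommends RT, AI does not

    po = (tp + tn) / n                                               # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2    # chance agreement
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0

    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")

    # Exact McNemar test: under H0 the discordant cases split evenly between
    # fp (AI overcalls) and fn (AI undercalls); two-sided binomial p-value.
    d = fp + fn
    p_mcnemar = (min(1.0, 2 * sum(comb(d, k) for k in range(min(fp, fn) + 1)) * 0.5 ** d)
                 if d else 1.0)
    return {"agreement": po, "kappa": kappa, "sensitivity": sens,
            "specificity": spec, "mcnemar_p": p_mcnemar}

# Hypothetical example: True = radiotherapy recommended
ai_rec    = [True, True,  False, True, True, False, True,  True]
board_rec = [True, False, False, True, True, False, False, True]
print(agreement_stats(ai_rec, board_rec))
```

In this framing, an overtreatment bias of the kind reported in the abstract would show up as fp (AI recommends radiotherapy, experts do not) exceeding fn among the discordant cases, with McNemar's test indicating whether that asymmetry is statistically significant.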
