Can artificial intelligence diagnose seizures based on patients’ descriptions? A study of GPT-4
Abstract
Introduction
Generalist large language models (LLMs) have shown diagnostic potential in various medical contexts. However, there has been little work on this topic in relation to epilepsy. This paper aims to test the performance of an LLM (OpenAI’s GPT-4) on the differential diagnosis of epileptic and functional/dissociative seizures (FDS) based on patients’ descriptions.
Methods
GPT-4 was asked to diagnose 41 cases of epilepsy (n=16) or FDS (n=25) based on transcripts of patients describing their symptoms. It was first asked to perform this task without any additional training examples (‘zero-shot’) and then again after being given one, two, and three examples of each condition (one-, two-, and three-shot). As a benchmark, three experienced neurologists were also asked to perform this task without access to any additional clinical information.
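The paper does not reproduce its prompts or code, but the zero- and few-shot setup can be sketched with the OpenAI Python client. In the sketch below, the model name, system-prompt wording, and the build_messages/classify helpers are illustrative assumptions rather than the authors' implementation; calling classify(transcript) with no examples corresponds to the zero-shot condition, while passing one, two, or three (transcript, label) pairs per condition gives the one-, two-, and three-shot conditions.

# Minimal sketch of the zero-/few-shot setup; model name, prompt wording,
# and helper names are illustrative assumptions, not the paper's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a neurologist. Based only on the patient's description of their "
    "seizures, answer with exactly one label: 'epilepsy' or 'FDS'."
)

def build_messages(transcript, examples=()):
    """examples: iterable of (example_transcript, label) pairs; empty = zero-shot."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for example_transcript, label in examples:  # one-, two-, or three-shot
        messages.append({"role": "user", "content": example_transcript})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": transcript})
    return messages

def classify(transcript, examples=()):
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output for reproducible classification
        messages=build_messages(transcript, examples),
    )
    return response.choices[0].message.content.strip().lower()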
Results
In the zero-shot condition, GPT-4’s average balanced accuracy was 57% (κ: .15). Balanced accuracy improved in the one-shot condition (64%, κ: .27) but did not improve further in the two-shot (62%, κ: .24) or three-shot (62%, κ: .23) conditions. Performance in all four conditions was worse than the average balanced accuracy of the experienced neurologists (71%, κ: .41).
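For context, balanced accuracy is the unweighted mean of per-class recall, which is the appropriate headline metric given the class imbalance (16 epilepsy vs. 25 FDS cases), and Cohen’s kappa expresses agreement beyond what would be expected by chance. The short scikit-learn example below illustrates how such figures are computed; the labels are invented for illustration and are not the study’s data.

# Hypothetical illustration of the reported metrics (labels are invented):
# balanced accuracy = mean of per-class recalls; kappa = chance-corrected agreement.
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score

y_true = ["epilepsy", "epilepsy", "fds", "fds", "fds"]
y_pred = ["epilepsy", "fds",      "fds", "fds", "epilepsy"]

print(balanced_accuracy_score(y_true, y_pred))  # (1/2 + 2/3) / 2 ≈ 0.58
print(cohen_kappa_score(y_true, y_pred))        # (0.6 - 0.52) / (1 - 0.52) ≈ 0.17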
Significance
Although its ‘raw’ performance was poor, GPT-4 showed a noticeable improvement after being given just one example each of a patient describing epilepsy and a patient describing FDS. Giving two and three examples did not improve performance further, but more elaborate approaches (e.g. more refined prompt engineering, fine-tuning, or retrieval-augmented generation) could unlock the full diagnostic potential of LLMs.
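As one illustration of the retrieval-augmented direction mentioned above, the few-shot examples could be selected dynamically by retrieving the previously diagnosed transcripts most similar to the new case and passing them to the classifier. The sketch below is a hypothetical outline of that idea, not something the paper implements; the embedding model and the retrieve_examples helper are assumptions, and the retrieved pairs would feed the classify sketch given earlier.

# Hypothetical retrieval-augmented example selection: embed the new transcript,
# retrieve the most similar labelled transcripts, and use them as few-shot examples.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text):
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def retrieve_examples(transcript, labelled_corpus, k=3):
    """labelled_corpus: list of (transcript, label) pairs with known diagnoses."""
    query = embed(transcript)
    scored = []
    for text, label in labelled_corpus:
        vec = embed(text)
        sim = float(np.dot(query, vec) / (np.linalg.norm(query) * np.linalg.norm(vec)))
        scored.append((sim, text, label))
    scored.sort(key=lambda item: item[0], reverse=True)  # most similar first
    return [(text, label) for _, text, label in scored[:k]]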