The Role of Prompt Engineering for Multimodal LLM Glaucoma Diagnosis

Abstract

Background and Aim

This study evaluates the diagnostic performance of multimodal large language models (LLMs), GPT-4o and Claude Sonnet 3.5, in detecting glaucoma from fundus images. We specifically assess the impact of prompt engineering and the use of reference images on model performance.

Methods

We used the public ACRIMA dataset of 705 labeled fundus images and designed four prompt types, ranging from simple instructions to more refined prompts incorporating reference images. The two models were tested across 5,640 API runs, and accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were assessed with non-parametric statistical tests.
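
The abstract does not reproduce the study's prompts or code, but a single "simple instruction" API run might look like the minimal sketch below, assuming the OpenAI Python SDK and a base64-encoded ACRIMA image; the prompt wording, file name, and answer parsing are illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch of one multimodal API run against GPT-4o.
# Prompt text and parsing are illustrative assumptions only.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("fundus_001.jpg", "rb") as f:  # one fundus image from the dataset
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Does this fundus image show glaucoma? Answer 'glaucoma' or 'normal'."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

label = response.choices[0].message.content.strip().lower()
predicted_glaucoma = "glaucoma" in label  # crude label extraction for illustration
```

A refined prompt with reference images would, in the same way, attach additional image entries (labeled exemplars of glaucomatous and healthy optic discs) alongside the test image in the message content.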

Results

Claude Sonnet 3.5 achieved a peak sensitivity of 94.92%, a specificity of 73.46%, and an F1 score of 0.726. GPT-4o reached a peak sensitivity of 81.47%, a specificity of 50.49%, and an F1 score of 0.645. Prompt engineering and the use of reference images improved GPT-4o’s accuracy by 39.8% and Claude Sonnet 3.5’s by 64.2%, significantly enhancing both models’ performance.
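
For reference, the reported measures follow the standard confusion-matrix definitions (TP, FP, TN, FN = true/false positives and negatives); these formulas are standard and are restated here only to make the figures easier to relate to one another:

    Sensitivity = TP / (TP + FN)
    Specificity = TN / (TN + FP)
    PPV         = TP / (TP + FP)
    F1          = 2 · PPV · Sensitivity / (PPV + Sensitivity)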

Conclusion

Multimodal LLMs demonstrated potential in diagnosing glaucoma, with Claude Sonnet 3.5 achieving a sensitivity of 94.92%, far exceeding the 22% sensitivity reported for primary care physicians in the literature. Prompt engineering, especially with reference images, significantly improved diagnostic performance. As LLMs become more integrated into medical practice, efficient prompt design may be key, and training doctors to use these tools effectively could enhance clinical outcomes.
