In-context learning for data-efficient classification of diabetic retinopathy with multimodal foundation models
Abstract
Importance
In-context learning, a prompt-based learning mechanism that enables multimodal foundation models to adapt to new tasks, can eliminate the need for retraining or large annotated datasets. We use diabetic retinopathy detection as an exemplar to probe in-context learning for ophthalmology.
Objective
To evaluate whether in-context learning using a multimodal foundation model (Google Gemini 1.5 Pro) can match the performance of a domain-specific model (RETFound) fine-tuned for diabetic retinopathy detection from color fundus photographs.
Design/Setting/Participants
This cross-sectional study compared two approaches for adapting foundation models to diabetic retinopathy detection using a public dataset of 516 color fundus photographs. The images were dichotomized into two groups based on the presence or absence of any signs of diabetic retinopathy. RETFound was fine-tuned for this binary classification task, while Gemini 1.5 Pro was evaluated under zero-shot and few-shot prompting, the latter incorporating random or k-nearest-neighbors-based sampling of a varying number of example images. Data were partitioned into training, validation, and test sets in a stratified manner, and the process was repeated for 10-fold cross-validation.
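The k-nearest-neighbors sampling described above can be sketched as follows. The abstract does not specify which embedding model or distance metric was used; this is a minimal illustration assuming generic image embeddings and cosine similarity, with hypothetical variable names (`query_emb`, `train_embs`):

```python
import numpy as np

def knn_few_shot(query_emb, train_embs, train_labels, k=4):
    """Select the k training images whose embeddings are most similar
    (by cosine similarity) to the query image, for use as in-context
    examples in a few-shot prompt."""
    q = query_emb / np.linalg.norm(query_emb)
    t = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    sims = t @ q                       # cosine similarity to the query
    nearest = np.argsort(-sims)[:k]    # indices of the k most similar images
    return [(int(i), int(train_labels[i])) for i in nearest]

# Stand-in embeddings: in practice these would come from an image encoder.
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 8))
train_labels = rng.integers(0, 2, size=100)
examples = knn_few_shot(train_embs[0] + 0.01, train_embs, train_labels, k=4)
```

The selected `(index, label)` pairs would then be inserted into the prompt as labeled example images ahead of the query image.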
Main Outcome(s) and Measure(s)
Performance was assessed via accuracy, F1 score, and expected calibration error of predictive probabilities. Statistical significance was evaluated using Wilcoxon tests.
Results
The best in-context learning performance with Gemini 1.5 Pro yielded an average accuracy of 0.841 (95% CI: 0.803–0.879), F1 score of 0.876 (95% CI: 0.844–0.909), and calibration error of 0.129 (95% CI: 0.107–0.152). RETFound achieved an average accuracy of 0.849 (95% CI: 0.813–0.885), F1 score of 0.883 (95% CI: 0.852–0.915), and calibration error of 0.081 (95% CI: 0.066–0.097). While accuracy and F1 scores were comparable (p>0.3), RETFound’s calibration was superior (p=0.004).
Conclusions and Relevance
Gemini 1.5 Pro with in-context learning demonstrated performance comparable to RETFound for binary diabetic retinopathy detection, illustrating how future medical artificial intelligence systems may build upon such frontier models rather than being bespoke solutions.
Key Points
Question
Can in-context learning using a general-purpose foundation model (Gemini 1.5 Pro) achieve performance comparable to a domain-specific model (RETFound) for binary diabetic retinopathy detection from color fundus photographs?
Findings
In this cross-sectional study, Gemini 1.5 Pro demonstrated accuracy and F1 scores comparable to the fine-tuned RETFound model. While classification performance was similar, RETFound showed better calibration.
Meaning
In-context learning with general-purpose foundation models like Gemini 1.5 Pro offers a promising, accessible approach for diabetic retinopathy detection, potentially enabling broader clinical adoption of advanced AI tools without the need for retraining or large labeled datasets.