Compact Vision–Language Models Enable Efficient and Interpretable Automated OCT Analysis Through Layer-Specific Multimodal Learning

Abstract

Translating the intricate anatomical signatures of retinal disease from OCT B-scans into clear, accurate clinical narratives demands AI models that seamlessly fuse visual features with domain expertise. We curated a multimodal dataset of 40,000 OCT B-scans from public repositories and private clinical cohorts, each paired with expert-validated summaries and spanning six conditions: diabetic macular edema, diabetic retinopathy, geographic atrophy, drusen, choroidal neovascularization, and healthy retina. We introduce LO-VLM, a compact (247M-parameter) vision–language model (VLM) that infuses anatomical guidance into both the encoder and the decoder for free-form summary generation and multiclass disease classification. Benchmarking against the state-of-the-art RetinaVLM, LLaVA-Med, and a vision-only ViT model demonstrates superior performance. In a blinded evaluation in which three board-certified retina specialists scored the generated summaries, LO-VLM narratives achieved a mean score of 8.5 (standard deviation = 1.15) out of 10, compared with 5.5 (standard deviation = 1.13) for RetinaVLM (p < 0.0001). In quantitative evaluations, LO-VLM achieved an SBERT similarity of 0.803 and a BERTScore F1 of 0.715, improvements of 8.2% and 28.8% over specialized VLM baselines. For disease classification, LO-VLM reached 96% accuracy (F1 = 96%), outperforming ViT by 13% and exceeding medical VLM benchmarks by over 62%. By reconciling interpretability with computational efficiency, LO-VLM establishes a new paradigm for efficient AI models in OCT interpretation.
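For context on the text-similarity metrics reported above, the sketch below shows one way SBERT cosine similarity and BERTScore F1 can be computed between generated and reference summaries. This is a minimal illustration, not the authors' evaluation code: the libraries used (sentence-transformers, bert-score), the SBERT checkpoint (all-MiniLM-L6-v2), and the example sentences are assumptions not taken from the article.

```python
# Minimal sketch (assumed setup, not the authors' pipeline):
# computing SBERT cosine similarity and BERTScore F1 between
# model-generated and reference OCT summaries.
# Requires: pip install sentence-transformers bert-score
from sentence_transformers import SentenceTransformer, util
from bert_score import score

generated = [
    "B-scan shows intraretinal cystoid spaces consistent with diabetic macular edema.",
]
references = [
    "Intraretinal fluid and cystoid changes indicate diabetic macular edema.",
]

# SBERT similarity: embed both texts and take the cosine of paired embeddings.
sbert = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint choice is an assumption
emb_gen = sbert.encode(generated, convert_to_tensor=True)
emb_ref = sbert.encode(references, convert_to_tensor=True)
sbert_sim = util.cos_sim(emb_gen, emb_ref).diagonal().mean().item()

# BERTScore F1: token-level soft matching of candidates against references.
_, _, f1 = score(generated, references, lang="en", verbose=False)

print(f"SBERT similarity: {sbert_sim:.3f}")
print(f"BERTScore F1:     {f1.mean().item():.3f}")
```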
