A Multimodal Deep Learning Method Based on Multiple Medical Images for Fuchs Endothelial Corneal Dystrophy Diagnosis
Abstract
Purpose: To establish a multimodal deep learning network for the fully automated diagnosis of Fuchs endothelial corneal dystrophy (FECD).

Methods: A ResNet-50 neural network was trained and validated on images from patients with FECD, patients with other anterior segment diseases, and healthy controls. Single-modal and multimodal models were developed from anterior segment photographs, anterior segment optical coherence tomography (OCT) images, and in vivo confocal microscopy (IVCM) images. Independent test sets were used to assess diagnostic performance, with precision, recall, F1 score, and accuracy as evaluation metrics.

Results: At the single-eye level, the multimodal model distinguished FECD, other anterior segment diseases, and healthy eyes with a precision of 0.9663, a recall of 0.971, and an F1 score of 0.9685. Its diagnostic performance was significantly better than that of the single-modal models based on anterior segment photographs (F1 score: 0.8664) and anterior segment OCT (F1 score: 0.8334), and slightly better than that of the single-modal model based on IVCM (F1 score: 0.9537).

Conclusion: This study presents the first multimodal deep learning model for diagnosing FECD, effectively distinguishing it from healthy corneas and various anterior segment disorders.

Translational Relevance: This multimodal AI integrates standard slit-lamp photographs, OCT, and IVCM inputs to deliver instant, technician-level FECD screening without additional hardware. Embedding the ResNet-50 classifier in EMR or cloud platforms could halve unnecessary corneal referrals and prioritise endothelial transplants before cataract surgery.