A Real-World Comparison of Three Deep-Learning Systems for Diabetic Retinopathy in Remote Australia

Abstract

Background/objective: Deep learning tools may improve access to screening for diabetic retinopathy (DR), a leading cause of vision loss. The aim of this study was to prospectively compare the performance of three deep learning systems (DLSs): Google ARDA, Thirona RetCAD™, and EyRIS SELENA+, for the detection of referable DR in a real-world setting.

Methods: Participants with diabetes presented to a mobile facility for DR screening in the remote Pilbara region of Western Australia, which has a high proportion of First Nations people. Sensitivity, specificity, and other performance indicators were calculated for each DLS against grading by an ophthalmologist adjudication panel. Cochran's Q with a post-hoc Dunn test assessed differences in DLS performance.

Results: A total of 188 colour fundus photographs were assessed: 39 had referable DR, 135 had no referable DR, and 14 were ungradable. Sensitivity/specificity were 100% (95% CI: 91.03-100%)/94.81% (89.68-97.47%) for ARDA, 97.37% (86.50-99.53%)/97.01% (92.58-98.83%) for RetCAD, and 91.67% (78.17-97.13%)/80.80% (73.02-86.74%) for SELENA+. DLS performance remained high among First Nations participants. ARDA and RetCAD results did not differ significantly from the ophthalmologist grading (p≥0.415).

Conclusions: In a real-world study in which the majority of participants were First Nations people, the DLSs showed high sensitivity and specificity for detecting referable DR. Implementation may increase DR screening rates and, with appropriate referral and treatment pathways, may help prevent vision loss and improve health equity.
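As a rough illustration of the analysis described above (not the authors' code), the sketch below computes sensitivity and specificity with Wilson 95% confidence intervals and applies Cochran's Q test across graders. The Wilson method is an assumption, chosen because it reproduces the reported 91.03-100% bound for 39/39 correct calls; all array names and the toy data are hypothetical, and the post-hoc Dunn test that would follow a significant Q statistic is omitted.

import numpy as np
from statsmodels.stats.proportion import proportion_confint
from statsmodels.stats.contingency_tables import cochrans_q

def sens_spec(pred, truth):
    # Sensitivity/specificity of a binary DLS call against the adjudicated reference.
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = int(np.sum(pred & truth))
    tn = int(np.sum(~pred & ~truth))
    sens = tp / truth.sum()
    spec = tn / (~truth).sum()
    # Wilson score intervals (assumed CI method, not stated in the abstract).
    sens_ci = proportion_confint(tp, truth.sum(), alpha=0.05, method="wilson")
    spec_ci = proportion_confint(tn, (~truth).sum(), alpha=0.05, method="wilson")
    return sens, sens_ci, spec, spec_ci

# Hypothetical toy data: rows = gradable images, columns = panel plus three DLS-like graders.
rng = np.random.default_rng(0)
panel = rng.integers(0, 2, size=100)
flip = lambda rate: (rng.random(100) < rate).astype(int)
calls = np.column_stack([panel,
                         panel ^ flip(0.03),
                         panel ^ flip(0.05),
                         panel ^ flip(0.10)])

res = cochrans_q(calls)  # do the graders call referable DR at different rates?
print(f"Cochran's Q = {res.statistic:.2f}, p = {res.pvalue:.3f}")
print(sens_spec(calls[:, 1], panel))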
