Eleflai: Towards primary care-centered AI-driven mobile otoscope



Abstract

Ear diseases are among the leading causes of medical consultation, antibiotic misuse, and hearing loss, affecting over one billion people worldwide. Early and accurate diagnosis is critical to alleviating this burden, yet remains challenging in primary care owing to specialist shortages, nonspecific early symptoms, and visually similar disease subtypes. Here we present OTO160K, the first large-scale ear feature dataset, comprising 166,792 otoendoscopic/otoscopic images from 81,726 participants aged 1–99 years. Using this resource, we developed Eleflai, a mobile otoscope integrated with a self-supervised object detection model. Eleflai enables real-time image acquisition and automatically detects eight abnormalities (such as perforation, hyperemia, and secretion) and three anatomical structures essential for diagnosing common ear diseases, including otitis media. In multicenter testing, Eleflai achieved mean average precision values of 0.751–0.779 for the 11 ear features at an intersection-over-union threshold of 0.5. In human–AI comparative experiments, it obtained a mean recall of 0.722, exceeding that of all well-trained primary care providers (0.527–0.674). In a multi-reader, multi-case study, Eleflai-assisted clinical training improved the diagnostic accuracy of primary care providers from 56.94% (95% CI: 52.52%–61.25%) to 78.84% (95% CI: 75.01%–82.23%) within one week, significantly outperforming the control group (p < 0.05). These findings highlight the potential of Eleflai to improve the accessibility and equity of ear care worldwide, particularly in primary care settings such as community screening and family health monitoring.
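The detection results above are reported as mean average precision at an intersection-over-union (IoU) threshold of 0.5, the standard criterion for counting a predicted bounding box as a correct detection. As a point of reference (this is a generic illustration, not the authors' evaluation code), the IoU between a predicted box and a ground-truth box can be computed as follows, with boxes given as `(x1, y1, x2, y2)` corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    A detection is typically counted as a true positive when its IoU with
    a ground-truth box meets the threshold (0.5 in the study above).
    """
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# A box shifted halfway across an identical box overlaps by 50 units
# out of a 150-unit union, so its IoU (1/3) falls below the 0.5 cutoff.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

Average precision then summarizes the precision–recall trade-off over all detections ranked by confidence, and the mean over the 11 feature classes gives the mAP values reported above.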
