Comprehensive aortic stenosis characterization using multi-view deep learning
Abstract
Background and Aims
Accurate assessment of aortic stenosis (AS) requires integrating structural and functional information, captured by visual traits as well as quantitation of gradients. Existing artificial intelligence (AI) models use either structural or functional information alone.
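For context, functional quantitation in echocardiography conventionally derives pressure gradients from Doppler velocities via the simplified Bernoulli relation. The worked example below uses the standard 4 m/s peak-velocity threshold for severe AS; it is general clinical background, not a result of this study.

```latex
% Simplified Bernoulli relation linking Doppler peak velocity to peak gradient
% (standard clinical convention; the numeric example is illustrative).
\Delta P_{\text{peak}} \approx 4\, v_{\text{peak}}^{2}
\qquad \Rightarrow \qquad
v_{\text{peak}} = 4~\text{m/s} \;\Rightarrow\; \Delta P_{\text{peak}} \approx 4 \times 4^{2} = 64~\text{mmHg}
```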
Methods
We developed EchoNet-AS, an open-source, end-to-end approach that combines video-based convolutional neural networks for assessing valve motion with segmentation models that automate measurement of aortic valve peak velocity to classify AS severity.
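The description above implies a two-stream design: video networks that read valve motion from B-mode clips, and a segmentation-derived peak velocity read from the Doppler trace, combined into a severity call. The sketch below illustrates that structure with a generic spatiotemporal backbone and a toy late-fusion rule; the backbone, head, fusion weights, and function names are assumptions for illustration, not the authors' released EchoNet-AS implementation.

```python
# Hypothetical sketch of a two-stream AS assessment: a video 3D CNN scores
# severe AS from a B-mode clip, and a Doppler-derived peak velocity is
# combined with it using a guideline threshold (>= 4 m/s suggests severe AS).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class VideoASClassifier(nn.Module):
    """Spatiotemporal CNN over an echo clip -> probability of severe AS (assumed head)."""

    def __init__(self):
        super().__init__()
        self.backbone = r3d_18(weights=None)  # randomly initialized video ResNet
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, clip):  # clip: (batch, 3, frames, height, width)
        return torch.sigmoid(self.backbone(clip)).squeeze(-1)


def severe_as_probability(video_probs, peak_velocity_ms, weight=0.5):
    """Toy late fusion: average per-view probabilities, blend with a Doppler flag."""
    video_score = sum(video_probs) / len(video_probs)
    doppler_score = 1.0 if peak_velocity_ms >= 4.0 else 0.0  # guideline cutoff
    return weight * video_score + (1 - weight) * doppler_score


if __name__ == "__main__":
    model = VideoASClassifier()
    dummy_clip = torch.randn(1, 3, 32, 112, 112)  # one 32-frame echo clip
    p_video = model(dummy_clip).item()
    print(severe_as_probability([p_video], peak_velocity_ms=4.3))
```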
Results
EchoNet-AS was trained on 210,193 images from 16,076 studies from Kaiser Permanente Northern California (KPNC) and validated on 1,589 held-out test studies and a temporally distinct cohort of 19,206 studies. The final model was also externally validated on 2,415 studies from Stanford Healthcare (SHC) and 9,038 studies from Cedars-Sinai Medical Center (CSMC). Combining assessments from multiple echocardiographic videos with Doppler measurements, EchoNet-AS achieved excellent discrimination of severe AS, with an AUC of 0.964 [95% CI: 0.952–0.973] in the KPNC held-out cohort and 0.985 [0.981–0.988] in the temporally distinct cohort, superior to models using single views or Doppler measurements alone. Performance remained robust in distinct external cohorts, with an AUC of 0.985 [0.975–0.992] at SHC and 0.989 [0.986–0.992] at CSMC.
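For readers unfamiliar with how AUC values with bracketed 95% confidence intervals like those above are typically obtained, the snippet below shows a generic nonparametric bootstrap over studies. The resampling scheme, iteration count, and percentile method are common defaults assumed here, not the study's stated statistical protocol.

```python
# Generic illustration of an AUC point estimate with a bootstrap 95% CI;
# parameters and resampling unit are assumptions, not the authors' protocol.
import numpy as np
from sklearn.metrics import roc_auc_score


def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    aucs = []
    n = len(y_true)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)           # resample with replacement
        if len(np.unique(y_true[idx])) < 2:   # need both classes to compute AUC
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return point, lo, hi


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, 500)                      # synthetic labels
    s = y * 0.6 + rng.normal(0, 0.4, 500)            # synthetic model scores
    print(auc_with_bootstrap_ci(y, s))
```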
Conclusions
EchoNet-AS synthesizes information from both B-mode videos and Doppler images to accurately assess AS severity. It generalizes robustly to external validation cohorts and shows potential as an automated clinical decision-support tool.