Anterior Vertical Relationship: Validation of an Artificial Intelligence Model vs. Digitally Assisted Human Observers
Abstract
Objective: This study aimed to develop and validate an artificial intelligence (AI) system for measuring and categorizing anterior vertical relationships, and to evaluate its performance against manual assessments by a human observer.

Materials and Methods: The study was structured in three phases: model training, validation, and final testing. A dataset of 750 intraoral frontal photographs from patients treated at the University of … was used for training and validation, while 300 additional intraoral images and scans formed the testing set. A YOLO (You Only Look Once) v8 Pose model was developed to perform automated tooth segmentation, followed by measurement and classification of anterior vertical relationships according to the Index of Complexity, Outcome, and Need (ICON). Manual measurements on intraoral scans were obtained using OrthoCAD software. Agreement between AI and human classifications was assessed with the Kappa statistic, and a chi-square test evaluated goodness of fit. Diagnostic performance was evaluated using sensitivity, specificity, predictive values, likelihood ratios, accuracy, and area under the curve (AUC).

Results: The AI system achieved 92% accuracy with excellent agreement with manual assessments (Kappa = 0.89, p < 0.0001). Discrepancies were minimal, at 3%. For deep bite detection, sensitivity was 95.9%, specificity 100%, and accuracy 97.2% (AUC = 0.979). For open bite detection, sensitivity reached 96.3%, specificity 100%, and accuracy 98.5% (AUC = 0.98).

Conclusion: The AI model demonstrated high accuracy and excellent agreement with manual measurements, confirming its potential as a reliable and objective tool for automated quantification of anterior vertical relationships in orthodontic diagnosis.
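The diagnostic statistics reported above (sensitivity, specificity, predictive values, accuracy, and Cohen's Kappa) can be sketched from a 2x2 confusion matrix. The counts in this example are illustrative placeholders, not the study's data, and the function names are hypothetical:

```python
# Minimal sketch of the diagnostic metrics named in the abstract.
# All counts below are hypothetical; they do not reproduce the study's results.

def diagnostic_metrics(tp, fn, fp, tn):
    """Return (sensitivity, specificity, PPV, NPV, accuracy) from 2x2 counts."""
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, ppv, npv, accuracy

def cohens_kappa(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
    """Cohen's Kappa for two raters (e.g. AI vs. human) on a binary label."""
    n = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
    p_observed = (both_pos + both_neg) / n     # observed agreement
    # chance agreement: product of each rater's marginal rates, summed per class
    p_pos = ((both_pos + a_pos_b_neg) / n) * ((both_pos + a_neg_b_pos) / n)
    p_neg = ((a_neg_b_pos + both_neg) / n) * ((a_pos_b_neg + both_neg) / n)
    p_chance = p_pos + p_neg
    return (p_observed - p_chance) / (1 - p_chance)

# Example with made-up counts: 95 true positives, 5 false negatives,
# 0 false positives, 100 true negatives.
sens, spec, ppv, npv, acc = diagnostic_metrics(95, 5, 0, 100)
```

Kappa is preferred over raw percent agreement here because it discounts the agreement two raters would reach by chance alone, which matters when one class (e.g. normal vertical relationship) dominates the sample.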
