Image Quality Evaluation of Panoramic Radiographs Using Vision Transformer: A Pilot Study

Abstract

Background: This study assesses the performance of a Vision Transformer (ViT)-based algorithm designed for the automatic detection of image quality defects in panoramic radiographs (PRs).

Methods: A total of 1806 anonymized PRs were retrospectively collected and randomly divided into training, validation, and test sets in a 4:1:1 ratio. Six categories of image quality defects were defined: foreign objects, image coverage, symmetry, head position, chin position, and tongue position. A ViT-based model was developed, trained, and fine-tuned. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The model's inference speed was also measured.

Results: The model achieved AUC values of 0.96, 0.96, 0.61, 0.62, 0.88, and 0.93 for detecting foreign objects, image coverage errors, symmetry defects, head positioning errors, chin positioning errors, and tongue positioning errors, respectively. The average processing time per image was 0.03 ± 0.002 seconds, indicating efficient real-time performance.

Conclusions: The proposed ViT-based deep learning algorithm demonstrates effective performance in detecting image quality defects in PRs. Its rapid processing speed and capability for real-time feedback highlight its potential as a valuable tool for quality control and operator training in clinical settings.
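The evaluation metrics listed in the Methods (accuracy, sensitivity, specificity, PPV, NPV) are standard functions of a binary confusion matrix, computed per defect category. A minimal sketch of that computation is shown below; the function name and the example counts are illustrative assumptions, not values from the study.

```python
# Sketch of the per-category evaluation metrics named in the abstract,
# computed from binary confusion-matrix counts. The counts below are
# hypothetical; the study does not report raw confusion matrices here.

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, sensitivity, specificity, PPV, and NPV for one defect class."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate on defective images
        "specificity": tn / (tn + fp),  # true-negative rate on defect-free images
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Example with made-up counts for a single defect category:
metrics = binary_metrics(tp=80, fp=10, tn=200, fn=11)
print(metrics)
```

AUC, also reported in the Results, is computed separately from the model's continuous scores rather than from a single thresholded confusion matrix.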
