Classification of Pediatric Dental Diseases from Panoramic Radiographs using Natural Language Transformer and Deep Learning Models

Abstract

Accurate classification of pediatric dental diseases from panoramic radiographs is crucial for early diagnosis and treatment planning. This study explores a text-based approach in which a natural language transformer generates textual descriptions of radiographs, which are then classified using deep learning models. Three models were evaluated for binary disease classification: a one-dimensional convolutional neural network (1D-CNN), a long short-term memory (LSTM) network, and a pretrained bidirectional encoder representations from transformers (BERT) model. Results showed that BERT achieved 77% accuracy, excelling at detecting periapical infections but struggling with caries identification. The 1D-CNN outperformed BERT with 84% accuracy, providing a more balanced classification, while the LSTM model achieved only 57% accuracy. Both the 1D-CNN and BERT surpassed three pretrained CNN models trained directly on panoramic radiographs, indicating that text-based classification is a viable alternative to traditional image-based methods. These findings highlight the potential of language-based models for radiographic interpretation while underscoring challenges in generalizability. Future research should refine text generation, develop hybrid models integrating textual and image-based features, and validate performance on larger datasets to enhance clinical applicability.
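To illustrate the text-classification idea described above, the following is a minimal NumPy sketch of a 1D-CNN over token embeddings: a convolution slides over the token sequence, global max-pooling collects one feature per filter, and a sigmoid head yields a disease probability. The vocabulary, parameter shapes, and initialization are hypothetical (untrained) and are not the study's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary for "radiograph description" text.
vocab = {"<pad>": 0, "tooth": 1, "caries": 2, "periapical": 3,
         "infection": 4, "healthy": 5}

def encode(tokens, length=8):
    """Map tokens to ids and pad/truncate to a fixed length."""
    ids = [vocab.get(t, 0) for t in tokens][:length]
    return np.array(ids + [0] * (length - len(ids)))

# Randomly initialised parameters; shapes only illustrate the architecture.
embed_dim, n_filters, kernel = 16, 4, 3
E = rng.normal(size=(len(vocab), embed_dim))          # embedding table
W = rng.normal(size=(n_filters, kernel, embed_dim))   # conv filters
b = np.zeros(n_filters)
w_out = rng.normal(size=n_filters)

def conv1d_classify(token_ids):
    x = E[token_ids]                                  # (seq_len, embed_dim)
    seq_len = x.shape[0]
    # Valid 1D convolution: each filter slides over the token sequence.
    feats = np.array([
        [np.sum(W[f] * x[i:i + kernel]) + b[f]
         for i in range(seq_len - kernel + 1)]
        for f in range(n_filters)
    ])
    pooled = feats.max(axis=1)                        # global max-pool per filter
    logit = pooled @ w_out                            # linear output head
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid -> probability

p = conv1d_classify(encode(["periapical", "infection", "tooth"]))
print(f"predicted disease probability: {p:.3f}")      # untrained, value is arbitrary
```

In practice each component would be trained end-to-end with a binary cross-entropy loss; this sketch only shows how textual descriptions become fixed-length inputs for a convolutional classifier.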
