Optimizing Lightweight Medical AI for Chest CT Classification: A Distillation and Quantization Approach
Abstract
Medical imaging is crucial in the diagnosis of pulmonary diseases, and the chest CT scan is a fundamental diagnostic tool for lung cancer and COVID-19. Despite their clinical importance, deep learning models for CT image classification remain hard to deploy because of high computational requirements and overfitting. State-of-the-art CNN models, including DenseNet121 and NASNetMobile, achieve near-perfect training accuracy yet generalize poorly and demand a large memory footprint (>2.4 GB), making them unfeasible in resource-limited healthcare settings. To address this, we introduce an end-to-end knowledge distillation and post-training quantization pipeline that converts large, overfitted teacher models into compact, well-generalizing student networks suitable for real-world medical AI deployment. Knowledge distillation lets the student models learn from hard labels as well as the teacher's softened probabilistic outputs, enhancing generalization and reducing overfitting. Post-training quantization further reduces model size by compressing both weights and activations to 8-bit precision, enabling efficient inference with minimal accuracy degradation. Experiments were run on the Kaggle Chest CT-Scan Images Dataset (1,252 samples, balanced COVID-19 and non-COVID-19 classes), standardized and augmented for fair evaluation. A variety of teachers were trained (DenseNet121, ResNet50, EfficientNetB3, VGG16/19, Xception, and NASNetMobile), distilled into compact students, and quantized for deployment. The proposed pipeline reduced memory usage roughly fourfold (from approximately 2,465 MB to approximately 618 MB), with the quantized DenseNet121 student reaching 91.4% validation accuracy compared to its teacher's 77.2%.
Distilled students also generalized better, with EfficientNetB3 and NASNetMobile attaining validation gains of +42% and +30%, respectively. This paper offers a deployable, resource-efficient medical AI architecture that balances diagnostic accuracy with computational efficiency. The findings show that knowledge distillation and quantization can be combined to deliver lightweight, high-performing chest CT classifiers for mobile CT devices, edge devices, and low-resource clinical settings, a step toward closing the gap between research-level AI systems and those that can be deployed effectively in the clinic.
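The two techniques at the core of the pipeline can be illustrated with a minimal sketch. This is not the paper's implementation; the temperature `T=4.0`, weighting `alpha=0.5`, and function names are assumptions chosen for illustration. The distillation loss mixes hard-label cross-entropy with a temperature-softened KL term (Hinton-style distillation), and the quantizer performs simple symmetric int8 post-training quantization of a weight tensor:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    # Hard-label term: cross-entropy against the ground-truth class index
    p_student = softmax(student_logits)
    ce = -np.log(p_student[hard_label] + 1e-12)
    # Soft-label term: KL divergence between temperature-softened teacher
    # and student distributions, scaled by T^2 so gradients stay comparable
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

def quantize_int8(w):
    # Symmetric post-training quantization: map the float32 range
    # [-max|w|, +max|w|] onto int8 [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale  # recover approximate weights via q * scale
```

Storing weights as int8 instead of float32 accounts for the roughly fourfold memory reduction the abstract reports (about 2,465 MB down to about 618 MB); in practice, frameworks also quantize activations and calibrate per-layer scales rather than using a single global scale as in this sketch.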