Adaptive Normalization Enhances the Generalization of Deep Learning Model in Chest X-Ray Classification

Abstract

Image normalization plays a critical role in improving the robustness of deep learning models for chest X-ray (CXR) classification, particularly under cross-dataset variability and domain shift. This study evaluates three normalization strategies (min–max scaling, Z-score normalization, and an adaptive approach combining percentile-based ROI cropping with histogram standardization) across four benchmark CXR datasets (ChestX-ray14, CheXpert, MIMIC-CXR, and Chest X-ray Pneumonia) and three CNN architectures (a lightweight CNN, EfficientNet-B0, and MobileNetV2). The adaptive method consistently improves validation accuracy, F1-score, and training stability compared to conventional normalization. MobileNetV2, in particular, achieves the highest F1-score of 0.89 on the Chest X-ray Pneumonia dataset under domain shift. Statistical analyses using Friedman–Nemenyi and Wilcoxon signed-rank tests confirm that these performance gains are significant. The results also indicate improved calibration and reduced overfitting when adaptive normalization is applied. By addressing variability in both image quality and ROI localization, this study demonstrates that adaptive preprocessing is an important design choice for developing reliable and generalizable AI models in radiological applications.
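To make the adaptive strategy concrete, the sketch below shows one plausible way to combine percentile-based ROI cropping with histogram standardization for a grayscale CXR image. It is a minimal illustration, not the authors' implementation: the percentile thresholds, the foreground heuristic, and the use of CDF-based global histogram equalization are all assumptions introduced here for clarity.

```python
import numpy as np

def adaptive_normalize(img, lower_pct=2.0, upper_pct=98.0):
    """Percentile-based ROI cropping followed by histogram standardization.

    A minimal sketch of the kind of adaptive preprocessing described in the
    abstract; percentiles and the equalization recipe are illustrative
    assumptions, not the published parameters.
    """
    img = img.astype(np.float32)

    # --- Percentile-based ROI cropping ---
    # Treat pixels above the lower intensity percentile as foreground and
    # crop to the bounding box of that region (assumed ROI heuristic).
    lo = np.percentile(img, lower_pct)
    mask = img > lo
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if rows.any() and cols.any():
        r0, r1 = np.where(rows)[0][[0, -1]]
        c0, c1 = np.where(cols)[0][[0, -1]]
        img = img[r0:r1 + 1, c0:c1 + 1]

    # --- Histogram standardization ---
    # Clip to the [lower, upper] percentile window and rescale to [0, 1]
    # so that scanner- and dataset-specific intensity ranges are aligned.
    lo, hi = np.percentile(img, [lower_pct, upper_pct])
    img = np.clip(img, lo, hi)
    img = (img - lo) / max(hi - lo, 1e-8)

    # Global histogram equalization via the empirical CDF, mapping the
    # cropped image to an approximately uniform intensity distribution.
    hist, bin_edges = np.histogram(img, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float32)
    cdf /= cdf[-1]
    img = np.interp(img.ravel(), bin_edges[:-1], cdf).reshape(img.shape)

    return img
```

In a training pipeline such a step would typically run before resizing to the CNN input resolution, so that the network receives images with comparable intensity statistics and a consistently framed anatomical region across datasets.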
