Improving Robust Image Classification Under Common Corruptions: A PDE-Regularized Variational Information Bottleneck Network
Abstract
Robust image classification remains challenging because deep Convolutional Neural Networks (CNNs) are highly sensitive to distribution shifts caused by common image corruptions such as noise, blur, and compression artifacts. To address this issue, this study investigates the proposed PDE-CNN-VIB architecture, which combines structural regularization derived from Partial Differential Equation (PDE) operators with information-theoretic feature compression based on the Variational Information Bottleneck (VIB) principle. The architecture was previously evaluated on the CIFAR-10 dataset and is further assessed here on both CIFAR-10 and its corrupted counterpart, CIFAR-10-C, using clean accuracy, negative log-likelihood (NLL), expected calibration error (ECE), and mean corruption accuracy (mCA). The model processes input images through a PDE-based regularization block and lightweight convolutional feature-adaptation layers, followed by a VIB module that suppresses task-irrelevant information before the features are reconstructed and forwarded to a ResNet-18 classification backbone. The evaluation shows that the framework improves robustness under common corruptions, particularly noise-related perturbations, while maintaining favorable clean-image performance and modest computational overhead. These findings indicate that combining PDE-based structural priors with VIB-driven feature compression is a promising approach for improving the reliability of CNNs under distribution shift.
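To give intuition for the two ingredients the abstract names, the following is a minimal toy sketch, not the authors' implementation: a one-dimensional heat-equation step stands in for the PDE-based regularization (diffusing high-frequency noise while keeping coarse structure), and a reparameterized Gaussian sample stands in for the VIB module's stochastic, compressed representation. All function names and parameter choices here are illustrative assumptions.

```python
import math
import random

def pde_smooth(x, dt=0.2, steps=3):
    """Toy PDE regularization: explicit Euler steps of the 1-D heat
    equation x_t = x_xx via the discrete Laplacian. High-frequency
    perturbations (e.g. pixel noise) are damped; boundaries are held fixed.
    Illustrative stand-in for the paper's PDE-based regularization block."""
    x = list(x)
    for _ in range(steps):
        lap = [0.0] * len(x)
        for i in range(1, len(x) - 1):
            lap[i] = x[i - 1] - 2.0 * x[i] + x[i + 1]
        x = [xi + dt * li for xi, li in zip(x, lap)]
    return x

def vib_sample(mu, log_var, rng):
    """Toy VIB reparameterization: z = mu + sigma * eps, eps ~ N(0, 1).
    Sampling from a learned Gaussian posterior (with a KL penalty, omitted
    here) is what compresses away task-irrelevant feature information."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# A noisy spike is flattened by diffusion, then passed through the bottleneck.
signal = [0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0]
smoothed = pde_smooth(signal)
z = vib_sample(smoothed, log_var=[-2.0] * len(smoothed),
               rng=random.Random(0))
```

In the full architecture these operations act on convolutional feature maps rather than a 1-D signal, and the VIB posterior parameters are produced by learned layers before the features are decoded for the ResNet-18 backbone.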