The Robustness of Deep Learning Models to Adversarial Attacks in Lung X-ray Classification

Abstract

With the rapid advancement of artificial intelligence (AI) and deep learning, AI-driven models are increasingly used in the medical field for disease classification and diagnosis. However, the robustness of these models against adversarial attacks is a critical concern, as such attacks can significantly distort diagnostic outcomes and lead to clinical errors. This study investigates the robustness of several deep learning models, including the CNN-based MobileNet and ResNet-152 as well as the Vision Transformer (ViT), on lung radiograph classification tasks under adversarial conditions. We used the "ChestX-ray8" dataset to train and evaluate these models, applying a range of adversarial attack methods, such as FGSM and AutoAttack, to assess their resilience. Our findings indicate that while all models lost accuracy under adversarial attack, MobileNet consistently demonstrated superior robustness compared with the other models evaluated. We also explored adversarial robustness training as a means of enhancing model stability. The results suggest that MobileNet's sparser parameterization underlies its robustness, offering insight into how the security and dependability of AI models in medical applications can be enhanced. This research underscores the need for continued refinement of AI models to ensure their safe deployment in clinical settings.
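
For readers unfamiliar with the attack named in the abstract, FGSM perturbs each input one small step along the sign of the gradient of the classification loss with respect to the input. The sketch below is a minimal PyTorch illustration, not the authors' code; the classifier, the [0, 1] input range, the number of classes, and the epsilon values are assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Craft adversarial examples with the Fast Gradient Sign Method:
    take one epsilon-sized step along the sign of the input gradient
    of the classification loss (Goodfellow et al., 2015)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Single-step L-infinity perturbation; clamping assumes pixel
    # values scaled to [0, 1].
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: compare clean vs. adversarial accuracy on a batch
# of radiographs (8 classes here is an assumption echoing ChestX-ray8).
# model = torchvision.models.mobilenet_v2(num_classes=8)
# adv_batch = fgsm_attack(model, batch, labels, epsilon=8 / 255)
```

Robustness is then measured by re-classifying the perturbed images and recording the drop in accuracy relative to the clean inputs; stronger attack suites such as AutoAttack follow the same evaluation pattern with more sophisticated perturbation search.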
