A Hybrid Ensemble Approach for Robust Detection of Adversarial Attacks on Medical X-ray Images

Abstract

Medical imaging systems driven by AI are revolutionizing diagnostics, yet they remain susceptible to adversarial attacks: small, purposefully crafted perturbations that can mislead algorithms and impair clinical judgment. Developing robust detection methods is therefore essential to guaranteeing patient safety and preserving confidence in AI-assisted diagnostics \cite{nasim2024ai}. In this work, we develop and thoroughly evaluate three methods for identifying adversarial perturbations in X-ray images: a Random Forest (RF), a Convolutional Neural Network (CNN), and a Hybrid Ensemble model that exploits their complementary strengths. The models were evaluated on a curated dataset of 12,677 X-ray images spanning several perturbation strengths (ϵ). Under moderate attack conditions (ϵ = 0.02), the Hybrid Ensemble consistently outperformed the standalone models, achieving 97.4% accuracy. Crucially, it reduced the most critical errors, false negatives, to just 15, compared with 38 for the Random Forest. The ensemble also proved more resilient, sustaining an F1-score of 97.4% under stronger attacks (ϵ = 0.03), where the RF's performance declined noticeably. By combining the CNN's spatial feature learning with the RF's sensitivity to statistical anomalies, the proposed Hybrid Ensemble offers a robust, dependable, and clinically applicable way to improve the security and trustworthiness of AI in medical imaging.
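
The abstract does not include implementation details, so the following is only a minimal sketch of how a hybrid CNN + Random Forest detector of this kind could be assembled. The network architecture, the hand-crafted statistical features, the toy noise standing in for ϵ-bounded perturbations, and the equal-weight soft-voting rule are all illustrative assumptions, not the authors' reported configuration.

# Minimal sketch of a hybrid CNN + Random Forest ensemble for adversarial
# detection (binary: clean vs. perturbed X-ray). Model sizes, feature choices,
# and the averaging rule are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import tensorflow as tf

IMG_SHAPE = (64, 64, 1)  # assumed input size for this sketch

def build_cnn():
    # Small CNN that learns spatial features and outputs P(adversarial).
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=IMG_SHAPE),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def statistical_features(images):
    # Hand-crafted per-image statistics for the RF: mean, standard deviation,
    # and mean absolute horizontal gradient (a crude proxy for perturbation noise).
    flat = images.reshape(len(images), -1)
    grad = np.abs(np.diff(images.squeeze(-1), axis=2)).mean(axis=(1, 2))
    return np.stack([flat.mean(1), flat.std(1), grad], axis=1)

# Toy data standing in for clean / perturbed X-rays (perturbation magnitude ~0.02);
# a real pipeline would use actual radiographs and a genuine attack such as FGSM.
rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(200, *IMG_SHAPE)).astype("float32")
adv = np.clip(clean + rng.choice([-0.02, 0.02], size=clean.shape), 0, 1).astype("float32")
X = np.concatenate([clean, adv])
y = np.concatenate([np.zeros(len(clean)), np.ones(len(adv))])

cnn = build_cnn()
cnn.compile(optimizer="adam", loss="binary_crossentropy")
cnn.fit(X, y, epochs=3, batch_size=32, verbose=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(statistical_features(X), y)

# Hybrid ensemble: average the two detectors' probabilities (soft voting).
p_cnn = cnn.predict(X, verbose=0).ravel()
p_rf = rf.predict_proba(statistical_features(X))[:, 1]
p_ensemble = 0.5 * (p_cnn + p_rf)
print("ensemble accuracy:", ((p_ensemble > 0.5) == y).mean())

Averaging the two probability streams is one simple way to fuse a spatial-feature detector with a statistics-based one; the fusion rule actually used in the paper may differ.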
