Beyond Input Stability: Redefining Adversarial Robustness in Embedded Medical AI


Abstract

Current approaches to adversarial robustness in embedded medical AI inherit definitions from general-purpose machine learning, chiefly stability under small input perturbations measured by Lp norms. We argue this is fundamentally insufficient for life-critical diagnostic devices. Using cardiac wearables with embedded ECG classifiers as our case study, we demonstrate that the adversarial threat surface extends to physical-layer signal injection, supply chain poisoning, and firmware compromise, each with distinct threat actors and real-world feasibility. We provide experimental evidence on the MIT-BIH Arrhythmia Database (87,906 beats) that adversarial attacks cause catastrophic recall degradation: a Random Forest ECG classifier achieving 96.2% arrhythmia recall drops to 50.9% under FGSM attack and 45.3% under a transfer attack using a surrogate model, confirming that model opacity is insufficient defense. We further demonstrate that single-signal physiological anchor checks achieve near-chance AUC (0.274-0.553) against amplitude-smoothing attacks, because such attacks preserve signal-level statistics while corrupting classifier-relevant features. This negative result motivates the core thesis of our Context-Aware Adaptive Inference (CAAI) framework: robust defense requires cross-parameter physiological coupling across multiple sensor modalities, not single-signal analysis. We formalize this as an open research problem and characterize the gap between single-modal and multi-modal anchoring as the primary obstacle to clinically deployable adversarial defense.
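The transfer attack summarized above can be sketched compactly. The following is a minimal, hypothetical Python illustration (assuming scikit-learn and NumPy, which the abstract does not specify): the attacker trains a differentiable surrogate, here a logistic regression whose input gradient is analytic, crafts one-step FGSM perturbations against it, and transfers them to the gradient-free Random Forest target. The synthetic data, feature dimensionality, and epsilon are placeholders, not the paper's MIT-BIH setup or reported settings.

    # Hedged sketch: FGSM transfer attack against a gradient-free target
    # (Random Forest) via a differentiable surrogate (logistic regression).
    # Data, dimensions, and eps are illustrative assumptions, not the
    # paper's MIT-BIH configuration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder for extracted ECG beat features (e.g., fixed-length windows).
    X = rng.normal(size=(5000, 180))
    y = (X[:, :10].sum(axis=1) > 0).astype(int)  # stand-in arrhythmia label
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Target model: opaque to the attacker, no usable gradients.
    target = RandomForestClassifier(n_estimators=100, random_state=0)
    target.fit(X_tr, y_tr)

    # Surrogate: differentiable, trained on attacker-accessible data.
    surrogate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    w, b = surrogate.coef_[0], surrogate.intercept_[0]

    def fgsm(X, y, eps):
        """One-step FGSM on the surrogate's logistic loss.
        For logistic regression the input gradient is analytic:
        dL/dx = (sigmoid(w @ x + b) - y) * w."""
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = (p - y)[:, None] * w[None, :]
        return X + eps * np.sign(grad)

    X_adv = fgsm(X_te, y_te, eps=0.1)  # eps chosen for illustration only

    clean = recall_score(y_te, target.predict(X_te))
    adv = recall_score(y_te, target.predict(X_adv))
    print(f"target recall: clean={clean:.3f} adversarial={adv:.3f}")

Because the perturbation is crafted entirely on the surrogate, any recall drop measured on the Random Forest demonstrates transferability, which is the sense in which the abstract argues that model opacity alone is not a defense.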
