SentinelAI: Cross-Modal Adversarial Defense for Edge AI via Physical-Layer Anomaly Correlation
Abstract
Deploying deep neural networks on edge and mobile devices exposes them to both digital adversarial threats—adversarial examples, backdoor triggers, model extraction—and physical-layer information leakage through electromagnetic (EM) emanations, power consumption traces, and acoustic emissions from the inference hardware itself. We observe that these two threat dimensions are fundamentally coupled: adversarial inputs induce abnormal computational patterns that produce detectable anomalies in the device’s physical side-channel emissions, creating a unique defense opportunity. We present SentinelAI, a cross-modal defense framework that detects adversarial attacks on edge AI by correlating anomalies across the computational layer (input features, activation distributions, output confidence) and the physical layer (EM, power, and timing traces captured during inference). SentinelAI comprises three components: (1) a Computational Anomaly Detector (CAD) that identifies suspicious inputs using activation-pattern analysis without modifying the target model; (2) a Physical Trace Verifier (PTV) that cross-references the side-channel signature of inference computation against a profile of legitimate behavior; and (3) a Cross-Modal Fusion Engine (CMFE) that combines both signals via an attention-based architecture to make robust detection decisions. We evaluate SentinelAI on 8 edge platforms across 6 adversarial attack types, 4 datasets, and 3 model architectures. SentinelAI achieves a 97.4% attack detection rate with a 1.6% false positive rate, outperforming the best computational-only detector by 12.8 percentage points and the best physical-only detector by 19.3 percentage points, while adding only 3.1 ms latency per inference.
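To make the fusion idea concrete, the sketch below shows one plausible shape for an attention-based cross-modal fusion step, in which computational anomaly features act as the query over a sequence of physical side-channel trace features. This is a minimal NumPy illustration under assumed interfaces; the function and weight names (`cross_modal_fusion`, `W_q`, `W_k`, `W_v`) are hypothetical and do not come from the paper, whose CMFE architecture may differ.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_modal_fusion(comp_feats, phys_feats, W_q, W_k, W_v):
    """Toy cross-attention fusion (illustrative, not the paper's CMFE).

    comp_feats: (1, d) computational anomaly features (e.g., from a CAD-like detector)
    phys_feats: (T, d) per-window physical trace features (e.g., EM/power segments)
    W_q, W_k, W_v: (d, d) learned projection matrices (randomly initialized here)
    Returns a fused feature vector of length 2*d for a downstream detection head.
    """
    q = comp_feats @ W_q                     # query from the computational side
    k = phys_feats @ W_k                     # keys from physical trace windows
    v = phys_feats @ W_v                     # values from physical trace windows
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (1, T) attention weights
    fused = attn @ v                         # physical evidence weighted by relevance
    return np.concatenate([comp_feats.ravel(), fused.ravel()])

# Usage with random stand-in features:
rng = np.random.default_rng(0)
d, T = 4, 6
comp = rng.normal(size=(1, d))
phys = rng.normal(size=(T, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
fused = cross_modal_fusion(comp, phys, W_q, W_k, W_v)
print(fused.shape)  # (8,) — concatenated computational + attended physical features
```

The attention weights let the detector emphasize the trace windows most relevant to the current input's computational signature, which is one way a fusion engine could correlate the two modalities rather than simply averaging independent anomaly scores.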