A Lightweight Explainability Framework for Neural Networks: Methods, Benchmarks, and Mobile Deployment

Read the full article See related articles

Listed in

This article is not in any list yet, why not save it to one of your lists.
Log in to save this article

Abstract

Explainability is increasingly crucial for real-world deployment of deep learning models, yet traditional explanation techniques can be prohibitively slow and memory- intensive on resource-constrained devices. This paper presents a novel lightweight ex- plainability framework that significantly reduces the computational cost of generating explanations without compromising on quality. My approach focuses on an optimized Grad-CAM pipeline with sophisticated thresholding, advanced memory handling, and specialized evaluation metrics. I demonstrate speedups exceeding 300x over naive im- plementations while maintaining robust faithfulness and completeness scores. Through an extensive series of benchmarks, user studies, and statistical tests, I show that this framework is scalable, accurate, and deployable on edge devices such as Raspberry Pi, Android phones, and iPhones. I also discuss ethical considerations, future research directions, and potential applications in high-stakes domains like healthcare and au- tonomous systems.

Article activity feed