ML-ConvNet: A Lightweight and Interpretable Unified Architecture for Medical Image Classification Across Modalities

Abstract

Medical image classification plays a crucial role in automated diagnostic systems, yet deploying accurate deep learning models in resource-constrained clinical settings remains challenging. Conventional convolutional neural networks often require high computational resources, limiting their applicability on edge devices. In this study, we propose ML-ConvNet, a lightweight and interpretable convolutional neural network designed as a unified architecture for multiple medical imaging modalities, including MRI, CT, and chest X-rays. The network incorporates a compact convolutional backbone, a Local Variance-Weighted (LVW) loss function to mitigate class imbalance, and a Hierarchical Dual-Pooling Attention (HDPA) module for channel-wise feature refinement and interpretable attention maps. Extensive experiments demonstrate that ML-ConvNet achieves competitive classification performance across all modalities while maintaining a minimal parameter count suitable for edge deployment. Ablation studies highlight the contribution of individual components to overall accuracy, and repeated runs with multiple random seeds confirm stable training dynamics. Edge deployment evaluation indicates low inference latency and reduced power consumption on devices such as Raspberry Pi, smartphones, and Edge TPUs, supporting practical usage in clinical workflows. The proposed architecture provides a balance between computational efficiency, interpretability, and predictive performance, offering a practical solution for real-time medical image classification across diverse modalities.
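The abstract describes the HDPA module only at a high level (dual pooling followed by channel-wise feature refinement). As a purely illustrative sketch, a dual-pooling channel-attention step in the spirit of CBAM-style attention might look like the following NumPy code; the function name, weight shapes, and reduction ratio are hypothetical and are not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_pooling_channel_attention(feat, w1, w2):
    """Hypothetical dual-pooling channel attention (not the paper's exact HDPA).

    feat : (C, H, W) feature map
    w1   : (C//r, C) first layer of a shared bottleneck MLP (r = reduction ratio)
    w2   : (C, C//r) second layer of the shared MLP
    Returns the channel-reweighted feature map of the same shape.
    """
    # Global average- and max-pooled channel descriptors, each of shape (C,)
    avg_desc = feat.mean(axis=(1, 2))
    max_desc = feat.max(axis=(1, 2))
    # Shared two-layer MLP (ReLU bottleneck) applied to both descriptors
    mlp = lambda d: w2 @ np.maximum(w1 @ d, 0.0)
    # Fuse the two pooled paths and squash to per-channel weights in (0, 1)
    weights = sigmoid(mlp(avg_desc) + mlp(max_desc))
    # Broadcast the channel weights over the spatial dimensions
    return feat * weights[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))       # toy feature map: 8 channels, 4x4
w1 = rng.standard_normal((2, 8)) * 0.1   # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
y = dual_pooling_channel_attention(x, w1, w2)
print(y.shape)
```

Because the attention weights pass through a sigmoid, each channel is scaled by a factor in (0, 1), which is also what makes the per-channel weights directly inspectable as an interpretability signal.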
