Efficient and Robust Convolutional Neural Network Design for Resource-Constrained Image Recognition Systems


Abstract

Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in visual recognition tasks through hierarchical feature learning. However, recent architectures increasingly rely on aggressive depth and parameter scaling, resulting in substantial computational overhead that restricts deployment on resource-constrained platforms. This paper introduces LiteRobustNet, a lightweight yet robust CNN framework jointly optimized for classification accuracy, computational efficiency, and real-world reliability. Three architectures—a baseline CNN, a deep CNN, and the proposed optimized model—are systematically evaluated on the CIFAR-10 dataset across predictive performance, parameter efficiency, inference latency, robustness to common image distortions, and deployment-level optimization. Experimental results demonstrate that LiteRobustNet reduces parameter count by 95.8% relative to the deep CNN while retaining nearly 90% of its classification accuracy. Robustness-aware training significantly improves stability under noise, blur, and resolution degradation [6], [7]. Furthermore, post-training quantization compresses the model to 0.066 MB and accelerates inference by approximately 40%, enabling practical edge deployment. These findings highlight the effectiveness of efficiency-aware architectural design combined with robustness optimization for real-time intelligent vision systems.
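The compression figures above come from post-training quantization, which maps trained float32 weights to low-bit integers. As a minimal illustration of the idea (not the paper's exact pipeline or tooling), the sketch below applies a per-tensor affine int8 quantization to a weight matrix; the function names and shapes are hypothetical:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: float32 -> int8 plus a scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 codes."""
    return q.astype(np.float32) * scale

# Illustrative weight tensor (stand-in for a trained layer's weights).
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

compression = w.nbytes / q.nbytes          # float32 -> int8 gives 4x smaller storage
max_err = float(np.abs(w - w_hat).max())   # rounding error bounded by scale / 2
print(compression, max_err)
```

Storing int8 codes instead of float32 weights yields a 4x size reduction per tensor; the roughly 40% inference speedup reported in the abstract would additionally depend on integer-arithmetic kernels in the deployment runtime.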
