Robust Memory Efficient Hybrid Adaptive Binarized Neural Network for Error Detection and Augmentation

Abstract

Binarized convolutional networks offer significant advantages in computational efficiency and energy savings, making them suitable for deployment on a range of hardware platforms, including CPUs, FPGAs, ASICs, and GPUs. Our work improves the training stability and accuracy of binarized neural networks by leveraging approximation techniques for binary weights and long-tailed activation binarization. These methods balance tight approximation against effective back-propagation, addressing the challenges posed by binarizing the derivative. To make binarized neural network (BNN) training stable, accurate, and energy efficient, we propose a Hybrid Adaptive Binarized Neural Network (HABNet). The key aspects of this architecture are: 1) To enhance model convergence and generalization, we incorporate L1 regularization and pre-activation throughout the architecture; because the soft sign lowered performance, we use the hard sign in our method. 2) We introduce Reduced Approximate Stochastic Depth on Pre-activation, a framework that dynamically shrinks the network depth during training while maintaining full depth during inference, improving feature reuse, training efficiency, and network reliability. 3) We further optimize our framework with CPU-efficient operations such as Binarycount() and xnor(), enabling high-performance matrix multiplication for binarized networks. In addition, we propose an Adversarially Trained Dual Batch Normalization scheme that binarizes the weights and pre-activations so the model can be trained for error detection and augmentation, making the system robust. Our methodology is evaluated against state-of-the-art binarization approaches and demonstrates superior accuracy, stability, and energy efficiency. Experimental results validate our approach, highlighting its potential for energy-efficient deep learning applications. We evaluate our methods on the ILSVRC (ImageNet) and CIFAR-10 datasets with VGGNet and ResNet architectures, and we increase the depth of a ResNet beyond 1000 layers while still obtaining significant improvements in test error.
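As a minimal sketch of the xnor-and-popcount idea the abstract refers to (the paper's Binarycount() and xnor() primitives), the snippet below shows how a dot product between two {-1, +1} vectors reduces to bitwise operations once the vectors are packed into machine words. The function and variable names are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch (not the paper's implementation): xnor + popcount
# replaces a dot product between {-1, +1} vectors once they are packed
# into bit-words (+1 -> bit 1, -1 -> bit 0).

def pack_bits(signs):
    """Pack a list of +1/-1 values into a single Python int, one bit each."""
    word = 0
    for i, s in enumerate(signs):
        if s > 0:
            word |= 1 << i
    return word

def binary_dot(word_a, word_b, n_bits):
    """Dot product of two {-1, +1} vectors of length n_bits via xnor + popcount.

    Matching bits (xnor == 1) contribute +1, mismatching bits contribute -1,
    so the result is 2 * popcount(xnor) - n_bits.
    """
    mask = (1 << n_bits) - 1
    xnor = ~(word_a ^ word_b) & mask      # 1 where the bits agree
    matches = bin(xnor).count("1")        # popcount (the "Binarycount" role)
    return 2 * matches - n_bits

if __name__ == "__main__":
    a = [+1, -1, +1, +1, -1]
    b = [+1, +1, +1, -1, -1]
    assert binary_dot(pack_bits(a), pack_bits(b), len(a)) == sum(x * y for x, y in zip(a, b))
    print(binary_dot(pack_bits(a), pack_bits(b), len(a)))  # -> 1
```

A full binarized matrix multiplication repeats this word-level xnor/popcount over rows and columns, which is why BNN inference maps efficiently onto CPU bit instructions as well as FPGA and ASIC logic.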
