A Novel Obfuscation Method Based on Majority Logic for Preventing Unauthorized Access to Binary Deep Neural Networks

Abstract

The significant expansion of deep learning applications has made the deep neural network (DNN) model a valuable asset that must be safeguarded from unauthorized access. This study proposes an innovative key-based algorithm-hardware co-design methodology to protect DNN models from unauthorized use. The proposed approach significantly reduces model accuracy when an incorrect key is applied, thereby preventing unauthorized users from exploiting the design. Given the importance of binary neural networks (BNNs) in the hardware implementation of state-of-the-art DNN models, we develop our methodology for BNNs; however, the technique can be broadly applied to other neural network accelerator designs. The proposed protection scheme is more efficient than comparable solutions across different BNN architectures and standard datasets. We validate the proposed hardware design with post-layout simulations in Cadence Virtuoso using the well-established TSMC 40 nm CMOS technology. The proposed approach yields reductions of 43%, 79%, and 71% in area, average power, and per-filter weight-modification energy, respectively. Additionally, the security of the key circuit is analyzed and evaluated against Boolean satisfiability-based attacks, structural attacks, reverse engineering, and power-based side-channel attacks.
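To illustrate the general principle behind key-based weight obfuscation, the following minimal Python sketch shows how a wrong key degrades a binary layer's outputs toward chance. This is an assumption-laden toy example (random +1/-1 weights, a simple sign-flip masking scheme, and the helper `restore` are all hypothetical), not the paper's majority-logic hardware implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single binary (+1/-1) fully connected layer; sizes are arbitrary.
n_in, n_out = 64, 10
true_weights = rng.choice([-1, 1], size=(n_in, n_out))        # "trained" binary weights
key = rng.integers(0, 2, size=(n_in, n_out))                  # secret key bits

# Obfuscated weights as stored: sign-flipped wherever the key bit is 1.
stored = true_weights * np.where(key == 1, -1, 1)

def restore(stored_w, key_bits):
    """Undo the obfuscation with a candidate key (correct key recovers weights)."""
    return stored_w * np.where(key_bits == 1, -1, 1)

x = rng.choice([-1, 1], size=(100, n_in))                     # binarized inputs
ref_logits = x @ true_weights                                 # reference outputs

correct = restore(stored, key)
wrong = restore(stored, rng.integers(0, 2, size=key.shape))   # attacker's random guess

agree_correct = np.mean(np.argmax(x @ correct, 1) == np.argmax(ref_logits, 1))
agree_wrong = np.mean(np.argmax(x @ wrong, 1) == np.argmax(ref_logits, 1))
print(f"agreement with correct key: {agree_correct:.2f}")     # 1.00
print(f"agreement with wrong key:   {agree_wrong:.2f}")       # near chance (~0.1)
```

With the correct key the stored weights are fully recovered, while a random key yields predictions that agree with the reference model only at roughly the chance level, which is the accuracy-collapse behavior the proposed protection relies on.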
