Investigation of Hardware Testing Utilizing Standard 4-D Convolution and Optimized Deep Convolution Formulas

Abstract

The purpose of this study is to examine the application and performance of standard four-dimensional convolution and an optimized deep convolution formulation in hardware testing. With the wide application of convolutional neural networks (CNNs) [1] in image processing, video processing, and other fields, efficiently completing these computing tasks on resource-limited hardware platforms has become a key problem. Although standard four-dimensional convolution is widely used, its computational complexity and resource consumption limit its application in large-scale convolutional networks. For this reason, deep convolution optimization techniques have been proposed to reduce computation and memory footprint. However, as CNNs grow deeper, the number of parameters required by convolution increases sharply, which makes on-chip memory solutions inefficient [2]. In this study, a field-programmable gate array (FPGA) was used as the test platform to evaluate the resource consumption difference between standard convolution and deep convolution by comparing their parameter counts, computation time, and power consumption under different hardware conditions. The test results show that deep convolution reduces memory footprint by about 90%, computation time by about 70%, and power consumption by about 50%. The results indicate that deep convolution performs well on resource-constrained hardware platforms, especially low-power devices such as mobile terminals and edge computing devices. In summary, deep convolution provides an efficient, low-power solution for modern convolutional neural network hardware implementations.

Keywords: Standard Four-Dimensional Convolution, Deep Convolution, Hardware Testing, Optimization, CNNs
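The scale of the reported savings can be illustrated with a simple parameter and multiply-accumulate (MAC) count. The sketch below assumes "deep convolution" refers to a depthwise-separable convolution (a per-channel depthwise filter followed by a 1x1 pointwise convolution); the layer dimensions are hypothetical and are not taken from the article.

```python
# Minimal sketch, assuming "deep convolution" = depthwise-separable convolution.
# Layer sizes below are hypothetical examples, not values from the article.

def standard_conv_cost(h, w, c_in, c_out, k):
    """Parameters and MACs of a standard 4-D convolution (K x K x C_in x C_out)."""
    params = k * k * c_in * c_out
    macs = params * h * w          # each output position reuses the full filter bank
    return params, macs

def depthwise_separable_cost(h, w, c_in, c_out, k):
    """Parameters and MACs of a depthwise conv followed by a 1x1 pointwise conv."""
    dw_params = k * k * c_in       # one K x K filter per input channel
    pw_params = c_in * c_out       # 1x1 convolution that mixes channels
    macs = (dw_params + pw_params) * h * w
    return dw_params + pw_params, macs

if __name__ == "__main__":
    # Hypothetical layer: 56x56 feature map, 128 -> 128 channels, 3x3 kernel
    h = w = 56
    c_in = c_out = 128
    k = 3

    std_p, std_m = standard_conv_cost(h, w, c_in, c_out, k)
    ds_p, ds_m = depthwise_separable_cost(h, w, c_in, c_out, k)

    print(f"standard conv : {std_p:>10,d} params, {std_m:>14,d} MACs")
    print(f"depthwise sep.: {ds_p:>10,d} params, {ds_m:>14,d} MACs")
    print(f"reduction     : {ds_p / std_p:.1%} of params, {ds_m / std_m:.1%} of MACs")
```

Under these assumptions the cost ratio is roughly 1/C_out + 1/K^2, which for a 3x3 kernel and 128 output channels is about 12%, consistent in order of magnitude with the roughly 90% memory reduction reported in the abstract.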