Cannistraci-Hebb Training of Convolutional Neural Networks

Abstract

Dynamic sparse training enables neural networks to evolve their topology during training, reducing computational overhead while maintaining performance. Cannistraci-Hebb Training (CHT), a brain-inspired method based on epitopological learning principles, has demonstrated significant advantages in building ultra-sparse fully connected networks. However, its application to convolutional neural networks (CNNs) faces challenges due to two fundamental constraints inherent in CNNs: receptive field locality and weight-sharing dependency. These constraints prevent the independent link manipulation that existing CHT frameworks require. We propose CHT-Conv, which extends CHT to convolutional layers while respecting these constraints. Experiments on CIFAR-10 and CIFAR-100 with the VGG16 architecture show that CHT-Conv achieves competitive or superior performance compared to Sparse Evolutionary Training (SET) at 50% and 70% sparsity levels. This work represents the first successful extension of epitopological learning principles to convolutional architectures, opening new possibilities for brain-inspired sparse training in modern deep learning.
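
To make the mask-manipulation idea behind dynamic sparse training concrete, the sketch below shows a generic SET-style prune-and-regrow step applied to the weight mask of a PyTorch Conv2d layer. This is an illustrative assumption, not the authors' CHT-Conv procedure (CHT scores candidate links with network-topology rules rather than regrowing at random); the function name prune_and_regrow and the drop_fraction parameter are hypothetical.

```python
# Illustrative SET-style prune-and-regrow on a convolutional weight mask.
# NOT the CHT-Conv algorithm from the paper; a generic sketch only.
import torch
import torch.nn as nn


def prune_and_regrow(conv: nn.Conv2d, mask: torch.Tensor, drop_fraction: float = 0.3):
    """Drop the smallest-magnitude active weights, then regrow the same number at random."""
    with torch.no_grad():
        weight = conv.weight                      # shape (out_ch, in_ch, kH, kW), shared across positions
        active = mask.bool()
        n_drop = int(drop_fraction * active.sum().item())
        if n_drop == 0:
            return mask

        # Prune: deactivate the n_drop active entries with the smallest magnitude.
        magnitudes = weight.abs().masked_fill(~active, float("inf"))
        drop_idx = torch.topk(magnitudes.flatten(), n_drop, largest=False).indices
        mask.view(-1)[drop_idx] = 0.0

        # Regrow: activate n_drop inactive entries chosen uniformly at random.
        # (Real implementations usually exclude just-pruned entries; CHT would
        # rank candidates by a topological score instead of sampling randomly.)
        inactive_idx = (mask.view(-1) == 0).nonzero(as_tuple=True)[0]
        perm = torch.randperm(inactive_idx.numel(), device=inactive_idx.device)
        grow_idx = inactive_idx[perm[:n_drop]]
        mask.view(-1)[grow_idx] = 1.0
        weight.view(-1)[grow_idx] = 0.0           # regrown links start from zero

        weight.mul_(mask)                         # enforce sparsity on the shared kernel
    return mask


# Example usage on a single layer at roughly 50% sparsity.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
mask = (torch.rand_like(conv.weight) > 0.5).float()
conv.weight.data.mul_(mask)
mask = prune_and_regrow(conv, mask, drop_fraction=0.3)
```

Because the mask has the same shape as the kernel, toggling a single entry changes that connection at every spatial position of the sliding window simultaneously. This is the weight-sharing dependency noted in the abstract, which rules out the per-link rewiring CHT performs in fully connected layers.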