Dynamic Token Masking in Spiking Neural Networks



Abstract

Spiking Neural Networks (SNNs) offer an energy-efficient alternative to conventional artificial neural networks (ANNs). Although ANN-to-SNN conversion has emerged as a promising route to high-performance spike-driven models with reduced training complexity, it still faces the challenge of preserving energy efficiency in the converted SNNs. In this paper, we address this limitation by introducing a dynamic spiking token mixer, motivated by the strong information redundancy present in the spike self-attention mechanism. Our approach replicates the selective processing capability of self-attention through dynamic token masking (DynMask), with layer-specific masking ratios tailored to both spatial and temporal significance. Comprehensive results establish DynMask as a practical step toward efficient deep learning systems: it achieves performance gains of up to +3.23% on ImageNet-1K while narrowing the accuracy gap with ANNs to as little as 0.02%, alongside energy consumption reductions of up to 44×. Moreover, our approach extends to complex vision tasks that remain largely unexplored in the SNN literature, including COCO detection and ADE20K segmentation.
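
The abstract does not detail the masking rule, but the core idea, scoring tokens by their spike activity and masking a layer-specific fraction of low-significance tokens before the token mixer, can be sketched concretely. The snippet below is a minimal, hypothetical PyTorch illustration that assumes spike counts accumulated over time steps and channels as the significance measure; the function name `dynmask`, the tensor layout, and the top-k selection are our own assumptions, not the authors' implementation.

```python
# Hypothetical sketch of dynamic token masking (DynMask): score each token by
# its spatio-temporal spike activity and zero out the least significant tokens,
# keeping a layer-specific fraction. Illustrative only, not the paper's code.
import torch

def dynmask(spikes: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Mask low-significance tokens in a binary spike tensor.

    spikes:      shape (T, B, N, D) = time steps, batch, tokens, channels.
    keep_ratio:  layer-specific fraction of tokens to keep, in (0, 1].
    Returns the spike tensor with masked tokens zeroed out.
    """
    T, B, N, D = spikes.shape
    # Spatio-temporal significance (assumed): total spikes per token,
    # summed over time steps and channels.
    significance = spikes.sum(dim=(0, 3))           # (B, N)
    k = max(1, int(N * keep_ratio))
    topk = significance.topk(k, dim=1).indices      # (B, k) kept token indices
    mask = torch.zeros(B, N, dtype=spikes.dtype, device=spikes.device)
    mask.scatter_(1, topk, 1.0)                     # 1 for kept tokens, else 0
    # Broadcast the (B, N) mask over the time and channel dimensions.
    return spikes * mask.view(1, B, N, 1)

# Example: 4 time steps, batch of 2, 196 tokens, 128 channels; keep 50%.
x = (torch.rand(4, 2, 196, 128) < 0.1).float()
y = dynmask(x, keep_ratio=0.5)
```

Because masked tokens are zeroed rather than gathered out, the sketch keeps tensor shapes static across layers; an implementation aiming for the reported energy savings would instead skip computation on the masked tokens entirely.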
