SNNs Are Not Transformers (Yet): The Architectural Problems for SNNs in Modeling Long-Range Dependencies
Abstract
Spiking neural networks (SNNs) have attracted growing interest for their ability to operate efficiently on low-power neuromorphic hardware, offering a biologically grounded route toward energy-efficient computation. However, despite advances in large-scale neuromorphic systems capable of simulating millions of spiking neurons and synapses, SNNs continue to underperform state-of-the-art (SOTA) artificial neural networks (ANNs) on complex sequence-processing tasks.
Here, we present an explicit covering-number bound analysis for SNNs based on the non-leaky integrate-and-fire (nLIF) model. Leveraging recent work on causal partitions and local Lipschitz continuity, we derive a global Lipschitz constant and show that the sample complexity of nLIF networks scales quadratically with input sequence length. We analytically compare these bounds with those of Transformer and recurrent neural network (RNN) architectures, revealing fundamental constraints on how current SNNs process long-range dependencies. Finally, we show that these theoretical assumptions align with known cortical mechanisms, particularly inhibitory normalization and refractoriness, and discuss their implications for developing future neuromorphic architectures that more closely approximate biological computation.
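To make the object of the analysis concrete, the sketch below simulates a single non-leaky integrate-and-fire (nLIF) unit in discrete time: the membrane potential integrates weighted input spikes without any decay term and resets after crossing a firing threshold. This is a minimal illustrative caricature only; the function name, threshold value, discrete-time formulation, and reset rule are assumptions for demonstration and do not reproduce the paper's exact nLIF formulation or its covering-number construction.

```python
import numpy as np

def nlif_forward(input_spikes, weights, threshold=1.0):
    """Simulate one non-leaky integrate-and-fire (nLIF) neuron.

    The membrane potential accumulates weighted presynaptic spikes with no
    leak (no decay between time steps); the neuron emits a spike once the
    potential reaches `threshold` and is then reset to zero.

    input_spikes: (T, N) binary array of presynaptic spikes over T steps.
    weights:      (N,)   synaptic weights.
    Returns a length-T binary array of output spikes.
    """
    T = input_spikes.shape[0]
    v = 0.0                              # membrane potential, never decays
    out = np.zeros(T, dtype=int)
    for t in range(T):
        v += input_spikes[t] @ weights   # pure integration, no leak factor
        if v >= threshold:
            out[t] = 1
            v = 0.0                      # reset after firing
    return out

# Example: 3 presynaptic neurons driving one nLIF unit over 5 time steps.
rng = np.random.default_rng(0)
spikes = rng.integers(0, 2, size=(5, 3))
w = np.array([0.4, 0.3, 0.5])
print(nlif_forward(spikes, w))
```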