A Practical Tutorial on Spiking Neural Networks: Comprehensive Review, Models, Experiments, Software Tools, and Implementation Guidelines
Abstract
Spiking Neural Networks (SNNs) provide a biologically inspired, event-driven alternative to Artificial Neural Networks (ANNs), with the potential to deliver competitive accuracy at substantially lower energy cost. This tutorial study offers a unified, practice-oriented assessment that combines a critical review with standardized experiments. We benchmark a shallow Fully Connected Network (FCN) on MNIST and a deeper VGG7 architecture on CIFAR-10 across multiple neuron models (Leaky Integrate-and-Fire (LIF), Sigma-Delta, etc.) and input encodings (direct, rate, temporal, etc.) using supervised surrogate-gradient training, implemented with Intel Lava/SLAYER, SpikingJelly, Norse, and PyTorch. Empirically, we observe a consistent but tunable trade-off between accuracy and energy. On MNIST, Sigma-Delta neurons with rate or Sigma-Delta encodings reach 98.1% (ANN: 98.23%). On CIFAR-10, Sigma-Delta neurons with direct input achieve 83.0% at just 2 time steps (ANN: 83.6%). A GPU-based operation-count energy proxy indicates that many SNN configurations operate below the ANN energy baseline; some frugal codes minimize energy at the cost of accuracy, whereas accuracy-leaning settings (e.g., Sigma-Delta with direct or rate coding) narrow the performance gap while remaining energy-conscious, yielding up to a 3-fold efficiency gain over matched ANNs in our setup. Thresholds and the number of time steps are decisive: intermediate thresholds combined with the shortest time window that still meets the accuracy target typically maximize energy efficiency. We distill actionable design rules: choose the neuron/encoding pair according to the application goal (accuracy-critical vs. energy-constrained) and co-tune thresholds and time steps. Finally, we outline how event-driven neuromorphic hardware can amplify these savings through sparse, local, asynchronous computation, providing a practical playbook for embedded, real-time, and sustainable AI deployments.
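For readers unfamiliar with the surrogate-gradient training referenced above, the sketch below shows one common way to implement it for a discrete-time LIF neuron in PyTorch: the non-differentiable Heaviside spike is paired with a rectangular surrogate derivative on the backward pass. This is a minimal illustration, not the paper's code; the threshold, leak factor `beta`, and surrogate window width are hypothetical choices (libraries such as SpikingJelly and Norse provide their own tuned variants).

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradient only where the membrane potential is near threshold.
        surrogate = ((v - ctx.threshold).abs() < 0.5).float()
        return grad_output * surrogate, None

def lif_step(x, v, threshold=1.0, beta=0.9):
    """One LIF update: leaky integration, spike, soft reset by subtraction."""
    v = beta * v + x
    s = SpikeFn.apply(v, threshold)
    v = v - s * threshold
    return s, v
```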
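The rate encoding mentioned among the input schemes can likewise be sketched as a Bernoulli (Poisson-like) spike train whose per-step firing probability equals the normalized pixel intensity. A minimal version, assuming inputs are already scaled to [0, 1]:

```python
import torch

def rate_encode(images: torch.Tensor, num_steps: int) -> torch.Tensor:
    """Bernoulli rate code: each pixel's intensity in [0, 1] becomes its
    per-time-step spike probability. Returns (num_steps, *images.shape)."""
    return torch.stack([torch.bernoulli(images) for _ in range(num_steps)])

# Example: a 10-step spike train for a batch of MNIST-sized images.
spikes = rate_encode(torch.rand(32, 1, 28, 28), num_steps=10)
```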
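Operation-count energy proxies can take several forms, and the paper's exact formula is not reproduced here. A commonly used variant, sketched below under stated assumptions, charges each dense ANN multiply-accumulate (MAC) and each spike-gated SNN accumulate (AC) a fixed per-operation energy; the 45 nm CMOS figures (~4.6 pJ per 32-bit MAC, ~0.9 pJ per 32-bit add, after Horowitz, ISSCC 2014) and the function names are assumptions for illustration.

```python
# Illustrative operation-count energy proxy (an assumed form, not
# necessarily the one used in the paper). Per-operation energies are
# commonly cited 45 nm CMOS estimates (Horowitz, ISSCC 2014).
E_MAC = 4.6e-12  # J per 32-bit multiply-accumulate (dense ANN op)
E_AC = 0.9e-12   # J per 32-bit accumulate (spike-gated SNN op)

def ann_energy(num_macs: int) -> float:
    """Dense network: every synaptic operation is a MAC."""
    return num_macs * E_MAC

def snn_energy(num_synops: int, spike_rate: float, num_steps: int) -> float:
    """Event-driven network: accumulates fire only when a spike arrives,
    so the op count scales with mean spike rate and the time window."""
    return num_synops * spike_rate * num_steps * E_AC
```

Under this assumed proxy, an SNN undercuts its matched ANN whenever spike_rate × num_steps < E_MAC / E_AC ≈ 5.1, which is one way to see why intermediate thresholds (sparser spiking) and short time windows are decisive for efficiency.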