Advances in Spiking Neural Networks

The field of spiking neural networks (SNNs) is evolving rapidly, driven by the twin goals of energy efficiency and competitive task performance. Recent work has produced novel architectures and training methods that address long-standing challenges. One key direction is the integration of SNNs with vision transformer architectures, which promises an energy-efficient, high-performance computing paradigm. Another is the development of new learning rules such as spike-synchrony-dependent plasticity, which encourages neurons to form coherent group-level activity patterns and supports stable, scalable learning; a minimal sketch of such a rule follows the list below. Noteworthy papers in this area include:

  • STEP, a unified benchmark framework for Spiking Transformers that provides modular support for diverse components and allows for systematic ablation studies.
  • ASRC-SNN, which replaces the vanilla recurrent structure with an Adaptive Skip Recurrent Connection, mitigating the vanishing-gradient problem and strengthening long-term temporal modeling (see the recurrence sketch after this list).
  • TDFormer, a model with a top-down feedback structure in which high-order representations from earlier time steps modulate the processing of low-order information at later steps, achieving state-of-the-art performance on ImageNet (see the gating sketch after this list).
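
To make the synchrony rule concrete, here is a minimal sketch of a group-level, synchrony-gated Hebbian update in Python. Everything in it is an assumption for illustration, not the rule from the paper: the name ssdp_update, the synchrony measure (fraction of the postsynaptic group firing in the same step), and the 0.5 threshold are all hypothetical.

```python
import numpy as np

def ssdp_update(weights, pre_spikes, post_spikes, lr=1e-3, sync_threshold=0.5):
    """One pass of a hypothetical spike-synchrony-dependent update.

    weights:     (n_pre, n_post) synaptic weight matrix, updated in place
    pre_spikes:  (T, n_pre)  binary spike trains over T time steps
    post_spikes: (T, n_post) binary spike trains over T time steps
    """
    for t in range(post_spikes.shape[0]):
        # Group-level synchrony: fraction of postsynaptic neurons firing together.
        sync = post_spikes[t].mean()
        hebb = np.outer(pre_spikes[t], post_spikes[t])
        if sync >= sync_threshold:
            # Coherent group activity: potentiate, scaled by how synchronous it was.
            weights += lr * sync * hebb
        else:
            # Asynchronous firing: weak depression keeps weights bounded.
            weights -= 0.1 * lr * (1.0 - sync) * hebb
    return weights

# Example: 20 time steps, 8 presynaptic and 4 postsynaptic neurons.
rng = np.random.default_rng(0)
w = ssdp_update(np.zeros((8, 4)),
                rng.integers(0, 2, (20, 8)).astype(float),
                rng.integers(0, 2, (20, 4)).astype(float))
```

The key departure from pairwise STDP is that the update is gated by a population statistic, so weights grow only when the group as a whole fires coherently.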
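
The skip recurrence behind ASRC-SNN can be pictured as an extra feedback edge that jumps k time steps instead of one, shortening the credit-assignment path through time. The sketch below fixes k and omits both the adaptive skip mechanism and the surrogate gradient; the class name, layer layout, and LIF constants are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SkipRecurrentLIF(nn.Module):
    """Hypothetical LIF layer with a skip recurrent connection.

    Besides the usual feedback from step t-1, the membrane also receives
    the spike output from step t-k. The real ASRC adapts the skip length
    during training; here k is a fixed constructor argument.
    """
    def __init__(self, dim, skip=4, tau=2.0, v_th=1.0):
        super().__init__()
        self.ff = nn.Linear(dim, dim)                    # feedforward input
        self.rec = nn.Linear(dim, dim, bias=False)       # t-1 recurrence
        self.skip_rec = nn.Linear(dim, dim, bias=False)  # t-k skip recurrence
        self.skip, self.tau, self.v_th = skip, tau, v_th

    def forward(self, x):  # x: (T, B, dim)
        T, B, D = x.shape
        v = torch.zeros(B, D, device=x.device)
        zero = torch.zeros(B, D, device=x.device)
        spikes = []
        for t in range(T):
            prev = spikes[t - 1] if t >= 1 else zero
            skipped = spikes[t - self.skip] if t >= self.skip else zero
            # Leaky integration of input plus both recurrent pathways.
            v = v * (1.0 - 1.0 / self.tau) + self.ff(x[t]) \
                + self.rec(prev) + self.skip_rec(skipped)
            s = (v >= self.v_th).float()  # hard threshold (surrogate gradient omitted)
            v = v - s * self.v_th         # soft reset after spiking
            spikes.append(s)
        return torch.stack(spikes)        # (T, B, dim) spike trains
```

Because backpropagation through time can flow along the t-k edge, gradients reach distant time steps in far fewer multiplicative hops than with t-1 recurrence alone.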
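
TDFormer's top-down pathway can likewise be approximated by a simple gating module: high-level features from step t-1 yield a sigmoid gate applied to low-level features at step t. The module name and shapes below are illustrative only and do not mirror the paper's architecture.

```python
import torch
import torch.nn as nn

class TopDownGate(nn.Module):
    """Illustrative top-down feedback gate (not TDFormer's actual modules).

    Pooled high-level features from the previous time step produce a
    sigmoid gate that modulates low-level token features at the current
    step, conditioning later processing on earlier abstractions.
    """
    def __init__(self, low_dim, high_dim):
        super().__init__()
        self.proj = nn.Linear(high_dim, low_dim)

    def forward(self, low_feats, prev_high_feats):
        # low_feats: (B, N, low_dim) tokens at step t
        # prev_high_feats: (B, high_dim) pooled deep features from step t-1
        gate = torch.sigmoid(self.proj(prev_high_feats)).unsqueeze(1)
        return low_feats * gate  # broadcast over the N token positions
```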

Sources

STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking

ASRC-SNN: Adaptive Skip Recurrent Connection Spiking Neural Network

Spiking Neural Networks with Temporal Attention-Guided Adaptive Fusion for Imbalanced Multi-modal Learning

MSVIT: Improving Spiking Vision Transformer Using Multi-scale Attention Fusion

Beyond Pairwise Plasticity: Group-Level Spike Synchrony Facilitates Efficient Learning in Spiking Neural Networks

TDFormer: A Top-Down Attention-Controlled Spiking Transformer
