Spiking Neural Networks: Efficient Training and Novel Architectures

The field of spiking neural networks (SNNs) is advancing rapidly, with a focus on efficient training algorithms and novel architectures that mimic the behavior of biological neurons. Recent work introduces training methods such as ADMM-based optimization, which addresses the non-differentiability of the SNN step function. New architectures such as CogniSNN, built on a random graph architecture, show strong potential for improving the expandability and neuroplasticity of SNNs. Other notable advances include energy-efficient SNNs for background subtraction and few-shot learning, as well as spike-driven video Transformers with linear temporal complexity. Together, these developments enable more efficient and accurate processing of complex data. Noteworthy papers include CogniSNN, which reaches 95.5% precision on the DVS-Gesture dataset, and SpikeVideoFormer, which achieves state-of-the-art performance on video tasks while offering significant efficiency gains.
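As background on the non-differentiability issue mentioned above: the spike generation step is a Heaviside function, so its exact gradient is zero almost everywhere and undefined at threshold. The cited papers tackle this with ADMM-based training and ANN-to-SNN conversion; the sketch below is not those methods but a minimal, commonly used surrogate-gradient workaround for a leaky integrate-and-fire (LIF) neuron, written in PyTorch, with all parameter values (tau, threshold, surrogate slope) chosen purely for illustration.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0.0).float()  # non-differentiable step

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Derivative of a fast sigmoid stands in for the step's zero/undefined gradient.
        surrogate_grad = 1.0 / (1.0 + 10.0 * membrane_potential.abs()) ** 2
        return grad_output * surrogate_grad


def lif_step(x, v, tau=2.0, v_th=1.0):
    """One leaky integrate-and-fire update: leak, integrate input, spike, hard reset."""
    v = v + (x - v) / tau
    spike = SurrogateSpike.apply(v - v_th)
    v = v * (1.0 - spike)  # reset membrane potential where a spike occurred
    return spike, v


# Toy usage: drive one LIF neuron with a random input train over T time steps.
T, batch = 8, 4
v = torch.zeros(batch)
x_seq = torch.rand(T, batch, requires_grad=True)
spikes = []
for t in range(T):
    s, v = lif_step(x_seq[t], v)
    spikes.append(s)
torch.stack(spikes).sum().backward()  # gradients flow through the surrogate
print(x_seq.grad.shape)  # torch.Size([8, 4])
```

The surrogate only alters the backward pass, so the network still emits binary spikes at inference time; alternative approaches such as the ADMM formulation avoid backpropagating through the step function altogether.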

Sources

ADMM-Based Training for Spiking Neural Networks

CogniSNN: A First Exploration to Random Graph Architecture based Spiking Neural Networks with Enhanced Expandability and Neuroplasticity

Input-Specific and Universal Adversarial Attack Generation for Spiking Neural Networks in the Spiking Domain

Sigma-Delta Neural Network Conversion on Loihi 2

SAEN-BGS: Energy-Efficient Spiking AutoEncoder Network for Background Subtraction

Self-cross Feature based Spiking Neural Networks for Efficient Few-shot Learning

Convolutional Spiking Neural Network for Image Classification

LAS: Loss-less ANN-SNN Conversion for Fully Spike-Driven Large Language Models

Spike-timing-dependent Hebbian learning as noisy gradient descent

SpikeVideoFormer: An Efficient Spike-Driven Video Transformer with Hamming Attention and $\mathcal{O}(T)$ Complexity

ILIF: Temporal Inhibitory Leaky Integrate-and-Fire Neuron for Overactivation in Spiking Neural Networks
