The field of spiking neural networks (SNNs) is advancing rapidly, with a focus on efficient training algorithms and novel architectures that mimic the behavior of biological neurons. Recent work includes ADMM-based training (alternating direction method of multipliers), which addresses the non-differentiability of the SNN step function, a core obstacle to applying gradient-based learning directly to spiking neurons. On the architecture side, CogniSNN employs a random graph architecture and shows promise for improving the expandability and neuroplasticity of SNNs. Other advances include energy-efficient SNNs for background subtraction and few-shot learning, and spike-driven video Transformers with linear temporal complexity, enabling more efficient processing of complex, time-varying data. Noteworthy papers include CogniSNN, which achieves 95.5% precision on the DVS-Gesture dataset, and SpikeVideoFormer, which achieves state-of-the-art performance on video tasks while offering significant efficiency gains.
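To make the non-differentiability problem concrete, the sketch below shows a single leaky integrate-and-fire (LIF) neuron step: the spike is produced by a Heaviside step function whose derivative is zero almost everywhere, so gradients cannot flow through it directly. The `surrogate_grad` function illustrates the common surrogate-gradient workaround (a smooth stand-in used during backpropagation); this is a standard illustration, not the ADMM formulation from the paper, and all function names here are hypothetical.

```python
import numpy as np

def heaviside(v):
    # Spike generation: 1 if membrane potential crosses threshold, else 0.
    # Derivative is 0 everywhere (undefined at 0), which blocks backprop.
    return np.where(v >= 0.0, 1.0, 0.0)

def surrogate_grad(v, beta=5.0):
    # Smooth stand-in for the step's derivative (fast-sigmoid style),
    # used only in the backward pass by surrogate-gradient methods.
    return beta / (2.0 * (1.0 + beta * np.abs(v)) ** 2)

def lif_step(v, x, tau=2.0, v_th=1.0):
    # One Euler step of a leaky integrate-and-fire neuron:
    # leak toward the input, spike at threshold, then hard reset.
    v = v + (x - v) / tau
    spike = heaviside(v - v_th)
    v = v * (1.0 - spike)
    return v, spike
```

With a constant supra-threshold input, the neuron integrates for a few steps, emits a spike, and resets; training methods differ mainly in how they push gradients (or, in the ADMM case, constraint updates) through that discrete spiking event.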
Spiking Neural Networks: Efficient Training and Novel Architectures
Sources
CogniSNN: A First Exploration to Random Graph Architecture based Spiking Neural Networks with Enhanced Expandability and Neuroplasticity
Input-Specific and Universal Adversarial Attack Generation for Spiking Neural Networks in the Spiking Domain