Advances in Spiking Neural Networks

The field of Spiking Neural Networks (SNNs) is advancing rapidly, with a focus on improving energy efficiency, scalability, and performance. Researchers are exploring novel architectures, conversion methods, and optimization techniques to overcome the challenges inherent to SNNs. One key direction is the development of energy-oriented computing architecture simulators, which help identify optimal architectures for SNN training. Another active area is the conversion of Artificial Neural Networks (ANNs) to SNNs, where techniques such as error compensation learning and proxy target frameworks are showing promising results. There is also growing interest in designing biologically plausible yet scalable spiking neurons in hardware, and in improving the performance of spike-based deep Q-learning using ternary neurons. Noteworthy papers include:

  • Energy-Oriented Computing Architecture Simulator for SNN Training, which proposes a simulator for identifying optimal architectures for SNN training.
  • Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control, which introduces a proxy target framework that stabilizes SNN training on continuous control tasks.
  • Efficient ANN-SNN Conversion with Error Compensation Learning, which presents a conversion framework based on error compensation learning to mitigate conversion errors.
  • SENMAP: Multi-objective data-flow mapping and synthesis for hybrid scalable neuromorphic systems, which introduces flexible mapping software for efficiently mapping large SNN and ANN applications onto adaptable architectures.
  • Optimal Spiking Brain Compression: Improving One-Shot Post-Training Pruning and Quantization for Spiking Neural Networks, which proposes a one-shot post-training pruning and quantization framework for SNNs.
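As background on the ANN-SNN conversion direction mentioned above: rate-based conversion replaces each ReLU unit with an integrate-and-fire neuron whose firing rate over a simulation window approximates the original activation, and the gap between rate and activation is the conversion error that methods such as error compensation learning aim to reduce. The following is a minimal illustrative sketch of this idea, not the implementation from any of the papers listed:

```python
def if_neuron_rate(input_current, threshold=1.0, timesteps=1000):
    """Simulate a non-leaky integrate-and-fire neuron driven by a
    constant input and return its firing rate, which approximates
    ReLU(input_current) / threshold."""
    v = 0.0
    spikes = 0
    for _ in range(timesteps):
        v += input_current      # integrate the (constant) input
        if v >= threshold:
            v -= threshold      # reset by subtraction: the residual
            spikes += 1         # charge carries over, reducing error
    # Note: negative inputs drive v arbitrarily negative here;
    # practical conversion pipelines typically clamp the membrane.
    return spikes / timesteps

# The spike rate tracks the ANN activation: for 0 <= x <= threshold,
# rate ~ x / threshold, while negative inputs never fire.
for x in (-0.5, 0.25, 0.5, 0.9):
    print(f"input {x:+.2f} -> rate {if_neuron_rate(x):.3f}, "
          f"ReLU {max(x, 0.0):.3f}")
```

With a finite window the rate is quantized to multiples of 1/timesteps, so shorter windows give larger conversion error; the reset-by-subtraction used here (rather than reset-to-zero) is a common way to keep that error small.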

Sources

Energy-Oriented Computing Architecture Simulator for SNN Training

Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control

Efficient ANN-SNN Conversion with Error Compensation Learning

Minimal Neuron Circuits -- Part I: Resonators

Improving Performance of Spike-based Deep Q-Learning using Ternary Neurons

SENMAP: Multi-objective data-flow mapping and synthesis for hybrid scalable neuromorphic systems

Optimal Spiking Brain Compression: Improving One-Shot Post-Training Pruning and Quantization for Spiking Neural Networks
