Advances in Neuromorphic Computing and Spiking Neural Networks

Neuromorphic computing and spiking neural networks (SNNs) are advancing rapidly, with a focus on more efficient, scalable, and biologically inspired models. Recent research explores stochastic equilibrium propagation (EP), parallelism in FPGA-based accelerators, and compression and inference techniques for SNNs on resource-constrained hardware, all aimed at improving performance and energy efficiency so that SNNs become practical on edge devices and in real-world applications. Notable results include a stochastic EP framework for training SNNs that achieves state-of-the-art performance on vision benchmarks while preserving locality; a lightweight C-based runtime that enables efficient SNN inference on conventional embedded platforms; SpikeNM, a semi-structured N:M pruning framework for SNNs; and PACE, a dataset distillation framework for fast SNN training. Together, these advances demonstrate the potential of neuromorphic computing and SNNs to deliver efficient, low-power, and adaptive intelligence across a wide range of applications.
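The N:M sparsity pattern underlying SpikeNM-style pruning is easy to illustrate: in every group of M consecutive weights, at most N are kept nonzero, a layout that maps well onto sparse hardware. The sketch below shows a plain magnitude-based N:M masking baseline in NumPy; the function name `nm_prune` and the 2:4 setting are illustrative assumptions, and SpikeNM itself adds SNN-specific machinery beyond this simple baseline.

```python
import numpy as np

def nm_prune(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Zero all but the n largest-magnitude weights in each group of m.

    A generic magnitude-based N:M baseline, not the SpikeNM algorithm
    itself; it only demonstrates the semi-structured sparsity pattern.
    Assumes weights.size is divisible by m.
    """
    flat = weights.reshape(-1, m)                 # group weights m at a time
    # Per group, indices of the (m - n) smallest-magnitude entries.
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)  # mark those entries pruned
    return (flat * mask).reshape(weights.shape)

# Example: 2:4 sparsity leaves exactly half of the weights nonzero.
w = np.random.randn(8, 8).astype(np.float32)
w_sparse = nm_prune(w)
assert np.count_nonzero(w_sparse) == w.size // 2
```

Because the sparsity is fixed per group rather than unstructured, an accelerator can skip pruned positions using a small per-group index, which is what makes semi-structured pruning attractive on resource-constrained hardware.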
Sources
Learning from Dense Events: Towards Fast Spiking Neural Networks Training via Event Dataset Distillation
LILogic Net: Compact Logic Gate Networks with Learnable Connectivity for Efficient Hardware Deployment
DS-ATGO: Dual-Stage Synergistic Learning via Forward Adaptive Threshold and Backward Gradient Optimization for Spiking Neural Networks
MS2Edge: Towards Energy-Efficient and Crisp Edge Detection with Multi-Scale Residual Learning in SNNs