The field of neural networks and computing is moving toward more energy-efficient solutions. Researchers are exploring approaches such as physical reservoir computing, differentiable logic gate networks, and spiking neural networks to reduce energy consumption, and these approaches have shown promising results in maintaining performance while lowering energy requirements. Notably, binary stochastic units, probabilistic bits, and input-aware multi-level spiking mechanisms have driven significant advances in energy efficiency. Moreover, robust frameworks for simulating and accelerating spiking neural networks on low-end FPGAs have made these solutions more accessible. Theoretical models have also been developed to capture the behavior of neural populations under metabolic stress, providing insights into optimal population codes. Noteworthy papers include:
- A Method for Optimizing Connections in Differentiable Logic Gate Networks, which introduces a method for partially optimizing the connections in Deep Differentiable Logic Gate Networks, improving performance while using fewer gates (a minimal sketch of the soft-gate relaxation these networks build on appears after this list).
- IML-Spikeformer: Input-aware Multi-Level Spiking Transformer for Speech Processing, which presents a spiking Transformer architecture that achieves competitive performance on large-scale speech processing tasks while reducing theoretical inference energy consumption (a basic spiking-neuron sketch also follows below).
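
To make the differentiable logic gate idea concrete, the following is a minimal sketch (not the paper's exact parameterization, and the gate ordering here is an assumption) of a soft two-input gate: each unit holds 16 learnable logits, one per possible Boolean function of its two inputs, and outputs a softmax-weighted mixture of real-valued relaxations of those functions, so the gate choice can be trained by gradient descent and hardened to a single gate for inference.

```python
import numpy as np

def soft_gate(a, b, logits):
    """Differentiable relaxation of a two-input logic gate (illustrative sketch).

    a, b   : input activations in [0, 1] (real-valued relaxations of bits)
    logits : 16 learnable scores, one per two-input Boolean function
    Returns a probability-weighted mixture of all 16 gate outputs.
    """
    # Real-valued surrogates for the 16 two-input Boolean functions.
    ops = np.array([
        0.0 * a,                  # FALSE
        a * b,                    # AND
        a - a * b,                # A AND NOT B
        a,                        # A
        b - a * b,                # NOT A AND B
        b,                        # B
        a + b - 2 * a * b,        # XOR
        a + b - a * b,            # OR
        1 - (a + b - a * b),      # NOR
        1 - (a + b - 2 * a * b),  # XNOR
        1 - b,                    # NOT B
        1 - b + a * b,            # A OR NOT B
        1 - a,                    # NOT A
        1 - a + a * b,            # NOT A OR B
        1 - a * b,                # NAND
        np.ones_like(a),          # TRUE
    ])
    # Softmax over the logits gives a trainable distribution over gates.
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return np.tensordot(w, ops, axes=1)

# Example: a gate whose logits have (nearly) converged to XOR.
logits = np.full(16, -4.0)
logits[6] = 4.0
print(soft_gate(np.array(1.0), np.array(0.0), logits))  # close to 1.0
```

The connection-optimization contribution of the noteworthy paper is separate from this relaxation; the sketch only illustrates why such networks are differentiable at all.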
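
For the spiking side, a minimal leaky integrate-and-fire (LIF) neuron sketch is shown below. This is a generic binary-spike model, not IML-Spikeformer's input-aware multi-level mechanism; the time constant, threshold, and reset scheme are illustrative assumptions. The energy argument is that downstream computation is only triggered where spikes occur.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron simulated over discrete timesteps.

    inputs : array of shape (T, N) with input currents for T timesteps.
    Returns a (T, N) array of binary spikes.
    """
    T, N = inputs.shape
    v = np.zeros(N)            # membrane potential per neuron
    spikes = np.zeros((T, N))
    for t in range(T):
        # Leaky integration of the input current toward its value.
        v = v + (inputs[t] - v) / tau
        # Fire wherever the potential crosses the threshold.
        fired = v >= v_threshold
        spikes[t] = fired.astype(float)
        # Reset the potential of neurons that fired.
        v = np.where(fired, v_reset, v)
    return spikes

# Example: a constant drive above threshold makes the neuron spike periodically.
drive = np.full((10, 1), 1.5)
print(lif_forward(drive).ravel())  # alternating 0/1 spike train
```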