The field of computer systems and neural networks is moving toward more efficient, power-conscious designs. Recent work has focused on improving the performance of sorting architectures and spiking neural networks (SNNs) while reducing their area and power consumption. One key trend is the use of novel number generators and sorting modules that operate without conventional comparators, yielding significant savings in area and power. Another is the development of adaptive bit allocation strategies for SNNs, which improve efficiency and accuracy by assigning memory and computation resources at a fine, layer-wise granularity. Researchers are also exploring unified memcapacitor-memristor memory for synaptic weights and neuron temporal dynamics, enabling simultaneous control of spatial and temporal dynamics in recurrent spiking neural networks. Noteworthy papers include:
- A novel ascending-order unary sorting module featuring a finite-state-machine-based unary number generator that reduces implementation costs (the comparator-free principle behind unary sorting is sketched after this list).
- An adaptive bit allocation strategy for direct-trained SNNs that achieves fine-grained layer-wise allocation of memory and computation resources.
- A compact two-layer spiking neural network optimized for efficient spike sorting, which leverages the Locally Competitive Algorithm (LCA) for sparse coding and operates entirely unsupervised (an LCA sketch also follows the list).
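To make the comparator-free idea concrete: when values are thermometer-encoded (a value n is a run of n ones), the bitwise AND of two codes equals the smaller value and the bitwise OR equals the larger, so an ascending sort can be built from plain AND/OR gates. The Python sketch below illustrates only this generic principle; the function names and the bubble-style network are illustrative assumptions and do not reproduce the paper's FSM-based unary number generator or its hardware design.

```python
# Illustrative sketch of comparator-free unary (thermometer-code) sorting.
# Assumption: this shows the generic AND/OR principle only, not the paper's
# FSM-based unary number generator or its specific module.

def to_unary(value, width):
    """Thermometer-encode `value` as a list of `width` bits (run of 1s)."""
    return [1 if i < value else 0 for i in range(width)]

def from_unary(bits):
    """Decode a thermometer code back to an integer (count of 1s)."""
    return sum(bits)

def unary_min(a, b):
    """Bitwise AND of two thermometer codes yields the smaller value."""
    return [x & y for x, y in zip(a, b)]

def unary_max(a, b):
    """Bitwise OR of two thermometer codes yields the larger value."""
    return [x | y for x, y in zip(a, b)]

def unary_sort_ascending(values, width):
    """Bubble-style sorting network built only from AND/OR 'compare-and-swap' cells."""
    codes = [to_unary(v, width) for v in values]
    n = len(codes)
    for _ in range(n):
        for i in range(n - 1):
            lo = unary_min(codes[i], codes[i + 1])
            hi = unary_max(codes[i], codes[i + 1])
            codes[i], codes[i + 1] = lo, hi
    return [from_unary(c) for c in codes]

print(unary_sort_ascending([5, 1, 7, 3], width=8))  # -> [1, 3, 5, 7]
```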
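Similarly, the sparse-coding step named in the spike-sorting paper can be sketched with the standard Locally Competitive Algorithm dynamics (soft-thresholded units competing through lateral inhibition). The dictionary, threshold, and step sizes below are illustrative assumptions, and this NumPy version is not the two-layer spiking implementation described in the paper.

```python
# Minimal sketch of the Locally Competitive Algorithm (LCA) for sparse coding.
# Assumption: parameters and the toy dictionary are illustrative; this is not
# the paper's two-layer SNN or its unsupervised spike-sorting pipeline.
import numpy as np

def lca_sparse_code(signal, dictionary, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
    """Return sparse coefficients `a` such that dictionary @ a approximates `signal`.

    signal:     (d,) input vector (e.g., a detected spike waveform snippet)
    dictionary: (d, k) matrix whose unit-norm columns act as waveform templates
    """
    drive = dictionary.T @ signal                     # feed-forward drive, Phi^T s
    gram = dictionary.T @ dictionary                  # Phi^T Phi
    inhibition = gram - np.eye(dictionary.shape[1])   # lateral inhibition, no self-term

    u = np.zeros(dictionary.shape[1])                 # membrane potentials
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
        u += (dt / tau) * (drive - u - inhibition @ a)       # LCA dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Toy usage: recover which of two templates generated a noisy snippet.
rng = np.random.default_rng(0)
templates = rng.standard_normal((32, 2))
templates /= np.linalg.norm(templates, axis=0)
snippet = 1.5 * templates[:, 1] + 0.05 * rng.standard_normal(32)
print(lca_sparse_code(snippet, templates).round(2))  # second coefficient dominates
```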