The field of neuromorphic computing and spiking neural networks (SNNs) is advancing rapidly, with a focus on efficiency, scalability, and performance. Recent work has centered on strengthening the processing capabilities of SNNs so they can handle complex tasks and large-scale data. In particular, innovations in in-storage computing, event-driven processing, and multi-timescale gating have improved the speed, energy efficiency, and accuracy of SNNs, with implications for edge computing, IoT applications, and real-time processing.

Noteworthy papers in this area include:

- FeNOMS: an in-storage processing architecture that combines Ferroelectric NAND flash with hyperdimensional computing, achieving a 43x speedup and 21x higher energy efficiency in mass spectrometry data processing.
- SpikePool: a spiking transformer whose pooling attention creates a selective band-pass filtering effect, delivering competitive results on event-based datasets while reducing training and inference time.
- Local Timescale Gates: a neuron model that combines dual time-constant dynamics with an adaptive gating mechanism, yielding markedly better accuracy and retention in sequential learning tasks.
- A Complete Pipeline for Deploying SNNs with Synaptic Delays on Loihi 2: an end-to-end pipeline for efficient event-based training and deployment of SNNs on neuromorphic hardware, demonstrating improved classification accuracy and substantial energy savings.
- SHaRe-SSM: an oscillatory spiking state-space model for target-variable modeling over long sequences that outperforms transformers and first-order SSMs while avoiding multiplication operations.
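To make the multi-timescale gating idea concrete, here is a minimal sketch of a neuron with two leaky membrane traces (a fast and a slow time constant) mixed by an adaptive gate before thresholding. The sigmoidal input-driven gate, the hard reset on spike, and all parameter names are illustrative assumptions, not the published Local Timescale Gates model.

```python
import numpy as np

def ltg_neuron(inputs, tau_fast=2.0, tau_slow=20.0, threshold=1.0, gate_weight=0.5):
    """Sketch of a dual time-constant spiking neuron with an adaptive gate.

    Two leaky traces decay at different rates; a sigmoidal gate (driven by the
    current input here, an assumed form) mixes them into one membrane potential
    that is compared against the spike threshold.
    """
    alpha_fast = np.exp(-1.0 / tau_fast)   # per-step decay of the fast trace
    alpha_slow = np.exp(-1.0 / tau_slow)   # per-step decay of the slow trace
    v_fast = v_slow = 0.0
    spikes = []
    for x in inputs:
        v_fast = alpha_fast * v_fast + x
        v_slow = alpha_slow * v_slow + x
        gate = 1.0 / (1.0 + np.exp(-gate_weight * x))  # adaptive mix (hypothetical)
        v = gate * v_fast + (1.0 - gate) * v_slow
        spike = int(v >= threshold)
        spikes.append(spike)
        if spike:  # hard reset of both traces on spike (one common convention)
            v_fast = v_slow = 0.0
    return spikes
```

Under a constant sub-threshold input, the slow trace integrates across steps until the mixed potential crosses threshold, so the neuron can respond to sustained drive that a single fast time constant would leak away.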
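The band-pass intuition behind pooling attention can be sketched as follows: a local pooling operation over tokens smooths out isolated high-frequency noise spikes, while re-binarizing against a threshold discards the weak low-frequency background, keeping activity in between. The mean pooling, window size, and 0.5 threshold below are illustrative assumptions rather than SpikePool's actual attention mechanism.

```python
import numpy as np

def pooling_token_mixer(spikes, window=3):
    """Sketch of a pooling-based token mixer on a binary (tokens, features) array.

    Each token's features are averaged over a local token window, then
    re-thresholded so the output stays spike-valued. Isolated single spikes
    are averaged away (low-pass), and the threshold removes weak diffuse
    activity (high-pass), giving a rough band-pass effect.
    """
    n, _ = spikes.shape
    pad = window // 2
    padded = np.pad(spikes, ((pad, pad), (0, 0)))  # zero-pad along the token axis
    mixed = np.empty(spikes.shape, dtype=float)
    for i in range(n):
        mixed[i] = padded[i:i + window].mean(axis=0)  # local average over tokens
    return (mixed >= 0.5).astype(int)  # re-binarize to spikes
```

For example, a feature that fires in only one of three neighboring tokens is suppressed, while a feature active in two of three survives the threshold.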