The field of spiking neural networks (SNNs) is moving toward more efficient and effective processing. Researchers are exploring new architectures and techniques to improve SNN performance, including shallow-level temporal feedback and symmetric mixing frameworks. These innovations aim to address the limitations of traditional SNNs, such as high energy consumption and low accuracy. Notable papers in this area include STF, which proposes a lightweight plug-and-play module for enhancing spiking transformers, and S$^2$M-Former, which introduces a novel spiking symmetric mixing framework for brain auditory attention detection. E2ATST and Live Demonstration: Neuromorphic Radar for Gesture Recognition also contribute approaches to energy-efficient architectures and event-driven processing.
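To make the event-driven, sparse computation that underlies all of these SNN approaches concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron layer, the basic unit most spiking architectures (including spiking transformers) build on. The function name, parameter values, and input currents are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def lif_step(v, i_in, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron layer.

    v     : membrane potentials (array)
    i_in  : input currents at this time step (array)
    Returns the updated potentials and a binary spike vector.
    Note: parameter values here are illustrative, not from any cited paper.
    """
    v = v + (dt / tau) * (-v + i_in)        # leaky integration toward input
    spike = (v >= v_th).astype(float)       # emit a spike where threshold is crossed
    v = np.where(spike > 0, v_reset, v)     # hard reset after a spike
    return v, spike

# Drive three neurons with constant currents of different strengths.
v = np.zeros(3)
spikes = []
for _ in range(50):
    v, s = lif_step(v, i_in=np.array([0.5, 1.2, 2.0]))
    spikes.append(s)

# Average spike rate per neuron: stronger input -> higher rate;
# sub-threshold input (0.5) never spikes, which is where the
# energy savings of event-driven processing come from.
rates = np.mean(spikes, axis=0)
```

Because neurons emit binary spikes only when driven above threshold, downstream computation can skip silent units entirely, which is the basis of the energy-efficiency claims made for SNN hardware and architectures.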