The field of neuromorphic computing is moving toward more biologically inspired and efficient models. Researchers are exploring dendritic computing, which mimics the branched structure and local processing of biological neurons, to increase learning capacity while reducing computational cost. This approach has shown significant potential for cutting the number of trainable parameters and improving performance in edge applications. Another area of focus is the optimization of low-complexity machine learning algorithms, such as the Tsetlin Machine, through efficient implementations of key operations like population count (counting the set bits in a word). Together, these developments point toward more efficient and capable neuromorphic systems. Noteworthy papers include: Dendritic Computing with Multi-Gate Ferroelectric Field-Effect Transistors, which proposes a novel neuron design that leverages ferroelectric nonlinearity for local computations, and Efficient FPGA Implementation of Time-Domain Popcount for Low-Complexity Machine Learning, which accelerates and optimizes population count operations using programmable delay lines and arbiters.