The field of neural networks is moving toward greater interpretability and efficiency. Researchers are developing methods to disentangle complex networks and explain how they make decisions, including new metrics that quantify polysemanticity in individual neurons and analyses that identify brain-like representations in artificial networks. In parallel, there is a push to improve scalability and performance through local learning rules and metric learning. Noteworthy papers include the following (illustrative sketches of the underlying techniques appear after the list):

- Scalable Equilibrium Propagation via Intermediate Error Signals for Deep Convolutional CRNNs, which achieves state-of-the-art performance on CIFAR-10 and CIFAR-100.
- Disentangling Polysemantic Neurons with a Null-Calibrated Polysemanticity Index and Causal Patch Interventions, which introduces a null-calibrated metric for quantifying how many distinct concepts a single neuron responds to.
- NM-Hebb: Coupling Local Hebbian Plasticity with Metric Learning for More Accurate and Interpretable CNNs, which proposes a two-phase training framework integrating neuro-inspired local plasticity with distance-aware supervision.
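Equilibrium propagation, the family of local learning rules the first paper builds on, trains a network by comparing two relaxation phases and applying a purely local, contrastive weight update. The sketch below shows a minimal classic EP loop on a toy fully connected network; it does not reproduce the paper's convolutional CRNN architecture or its intermediate error signals, and the simplified dynamics, layer sizes, and rates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    # Hard-sigmoid activation, as in the original EP formulation.
    return np.clip(s, 0.0, 1.0)

# Toy layered net: input (clamped) -> hidden -> output, with symmetric coupling.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))

def relax(x, y=None, beta=0.0, steps=100, dt=0.2):
    """Settle hidden/output states to a fixed point of the (simplified)
    dynamics; with beta > 0 the output is weakly nudged toward the target y."""
    h, o = np.zeros(n_hid), np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + rho(o) @ W2.T
        do = -o + rho(h) @ W2
        if beta > 0.0:
            do += beta * (y - rho(o))  # nudging force from the loss
        h += dt * dh
        o += dt * do
    return h, o

def ep_step(x, y, beta=0.5, lr=0.05):
    """Contrastive EP update: difference of local co-activations between the
    nudged and free equilibria, scaled by 1/beta. No backprop anywhere."""
    global W1, W2
    h0, o0 = relax(x)                # free phase
    hb, ob = relax(x, y, beta=beta)  # weakly clamped (nudged) phase
    W1 += (lr / beta) * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
    W2 += (lr / beta) * (np.outer(rho(hb), rho(ob)) - np.outer(rho(h0), rho(o0)))

# Toy usage: repeatedly nudge the output toward a one-hot target.
x_demo, y_demo = rng.random(n_in), np.array([1.0, 0.0])
for _ in range(20):
    ep_step(x_demo, y_demo)
```

Because the update depends only on pre- and post-synaptic activity at the two equilibria, no gradients are backpropagated, which is what makes the rule local.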
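The polysemanticity paper's actual index is not spelled out in this summary, so the following is a hypothetical stand-in that captures the "null-calibrated" idea: score a neuron by the entropy of concept labels among its top-activating inputs, normalized by the mean entropy under a label-shuffled null. The function name, the entropy choice, and the shuffling null are all assumptions.

```python
import numpy as np

def polysemanticity_index(acts, concepts, top_frac=0.05, n_null=1000, seed=0):
    """Hypothetical index (not the paper's): entropy of concept labels among a
    neuron's top-activating inputs, normalized by the mean entropy under a
    label-shuffled null. Near 0 = monosemantic, near 1 = chance-level mixing."""
    rng = np.random.default_rng(seed)
    acts, concepts = np.asarray(acts), np.asarray(concepts)
    k = max(1, int(top_frac * len(acts)))
    top = np.argsort(acts)[-k:]  # indices of the k most strongly activating inputs

    def label_entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    observed = label_entropy(concepts[top])
    # Null model: a neuron with no concept preference sees a random size-k
    # subset of the label multiset.
    null = np.mean([label_entropy(rng.permutation(concepts)[:k])
                    for _ in range(n_null)])
    return observed / null

# Toy check: a neuron tuned to one concept vs. one that mixes two concepts.
rng = np.random.default_rng(1)
concepts = np.repeat(np.arange(10), 100)
mono = rng.normal(size=1000) + 3.0 * (concepts == 0)            # one concept
poly = rng.normal(size=1000) + 3.0 * np.isin(concepts, [0, 7])  # two concepts
print(polysemanticity_index(mono, concepts))  # low, close to 0
print(polysemanticity_index(poly, concepts))  # noticeably higher
```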
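For NM-Hebb, the summary says only that local Hebbian plasticity is coupled with distance-aware supervision. As a rough illustration of what such a coupling can look like (not the paper's actual rule, and with its two training phases compressed into one step), the sketch below adds an Oja-style local update to a linear layer and combines a cross-entropy loss with a triplet metric loss; all names, sizes, and coefficients are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HebbianLinear(nn.Module):
    """Linear layer with an auxiliary local, Oja-style Hebbian update
    (a stand-in for the paper's plasticity rule, which may differ)."""

    def __init__(self, in_features, out_features, hebb_lr=1e-3):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.hebb_lr = hebb_lr
        self._dw = None

    def forward(self, x):
        y = F.linear(x, self.weight)
        if self.training:
            # Stash the local delta now, apply it after the gradient step so we
            # never mutate tensors that autograd has saved for backward.
            yd, xd = y.detach(), x.detach()
            hebb = torch.einsum("bo,bi->oi", yd, xd)                # post x pre
            decay = (yd * yd).sum(0).unsqueeze(1) * self.weight.detach()
            self._dw = (hebb - decay) / x.size(0)                   # Oja's rule
        return y

    @torch.no_grad()
    def apply_hebbian(self):
        if self._dw is not None:
            self.weight += self.hebb_lr * self._dw
            self._dw = None

# One combined step: task loss + a distance-aware triplet term via backprop,
# then the local Hebbian update.
torch.manual_seed(0)
backbone, head = HebbianLinear(128, 64), nn.Linear(64, 10)
opt = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=0.1)
triplet = nn.TripletMarginLoss(margin=1.0)

x, labels = torch.randn(32, 128), torch.randint(0, 10, (32,))
emb = backbone(x)
# Naive label-based triplet mining, for illustration only (an anchor can end
# up paired with itself as its own positive).
pos = torch.stack([emb[(labels == l).nonzero().flatten()[-1]] for l in labels])
neg = torch.stack([emb[(labels != l).nonzero().flatten()[0]] for l in labels])
loss = F.cross_entropy(head(emb), labels) + 0.1 * triplet(emb, pos, neg)
loss.backward()
opt.step(); opt.zero_grad()
backbone.apply_hebbian()  # local plasticity applied after the global update
```

Deferring the Hebbian delta to after the optimizer step keeps the local rule from interfering with autograd's saved tensors, which is one simple way to let a backprop-trained loss and a local plasticity rule coexist.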