The field of artificial intelligence is witnessing a significant shift towards self-supervised learning and neural dynamics. Recent studies have focused on the theoretical foundations of self-supervised representation learning, with particular emphasis on contrastive losses and their underlying principles; a minimal sketch of such a contrastive objective appears after the list below. There is also growing interest in bio-inspired neural models that capture the complex dynamics of natural neural circuits. These models have shown promising results in both performance and interpretability, and are being explored for applications in machine learning and neuroscience. Noteworthy papers in this area include:

- Understanding Self-supervised Contrastive Learning through Supervised Objectives, which provides a theoretical perspective on self-supervised representation learning.
- BioOSS: A Bio-Inspired Oscillatory State System with Spatio-Temporal Dynamics, which proposes a neural model that emulates the wave-like propagation dynamics of natural neural circuits.
- Redundancy as a Structural Information Principle for Learning and Generalization, which presents a theoretical framework for the role of redundancy in learning and generalization.
- On the Optimal Representation Efficiency of Barlow Twins: An Information-Geometric Interpretation, which introduces an information-geometric framework for analyzing the efficiency of self-supervised learning algorithms.
- Learning by Steering the Neural Dynamics: A Statistical Mechanics Perspective, which studies the emergence of robust dynamical attractors in random asymmetric recurrent networks.
- Learning at the Speed of Physics: Equilibrium Propagation on Oscillator Ising Machines, which demonstrates the potential of physical systems for accelerating machine learning.
- Information Shapes Koopman Representation, which proposes an information-theoretic Lagrangian formulation for learning Koopman representations.
- Performance Evaluation of Ising and QUBO Variable Encodings in Boltzmann Machine Learning, which compares the performance of different variable encodings in Boltzmann machine learning.
- Information flow in multilayer perceptrons: an in-depth analysis, which analyzes the flow of information in multilayer perceptrons.
- Rethinking Hebbian Principle: Low-Dimensional Structural Projection for Unsupervised Learning, which introduces an unsupervised learning method that integrates orthogonality and structural information preservation.
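To make the contrastive-loss theme above concrete, below is a minimal NumPy sketch of the standard InfoNCE (NT-Xent) objective that underlies much of self-supervised contrastive learning. It illustrates the generic loss form rather than the specific analysis of any paper listed above; the function name, batch size, and temperature value are illustrative assumptions, and for simplicity negatives are drawn only from the second view rather than from both views as in SimCLR.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1, eps=1e-9):
    """Simplified InfoNCE / NT-Xent contrastive loss over two views.

    z1, z2: (batch, dim) embeddings of two augmentations of the same
    batch of inputs; row i of z1 and row i of z2 form a positive pair,
    while all other rows of z2 serve as negatives for row i of z1.
    """
    # Cosine similarity via L2-normalized embeddings.
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + eps)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + eps)

    logits = z1 @ z2.T / temperature             # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positive pairs lie on the diagonal; the loss is their mean negative log-probability.
    return -np.mean(np.diag(log_probs))

# Example usage with random embeddings of two correlated views.
rng = np.random.default_rng(0)
z = rng.normal(size=(128, 64))
print(info_nce_loss(z, z + 0.1 * rng.normal(size=(128, 64))))
```

The temperature parameter controls how sharply the softmax concentrates on the most similar negatives; smaller values weight hard negatives more heavily, which is one of the properties the theoretical work summarized above seeks to explain.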