The field of deep learning is undergoing a significant shift toward understanding the underlying principles and phenomena that drive its success. Researchers are moving away from ad hoc explanations and toward broad explanatory theories that apply across many deep learning models. This is evident in the growing interest in compositional sparsity, the idea that target functions can often be expressed as compositions of constituent functions, each depending on only a few variables, a structure that deep neural networks are well suited to exploit. Another area of growing interest is the use of category theory to provide a semantic framework for understanding and structuring AI systems. Parametric models, such as those used in variational problems, are also being re-examined in light of new frameworks and techniques. A concrete illustration of compositional sparsity is sketched below.

Noteworthy papers in this area include InfinityKAN, which adaptively learns a potentially infinite number of bases for each univariate function during training; the Gauss-Markov Adjunction, which provides a categorical formulation of supervised learning; the High-Order Deep Meta-Learning framework, which enables neural networks to construct, solve, and generalize across hierarchies of tasks; and a position paper arguing that compositional sparsity is a fundamental principle governing the learning dynamics of deep neural networks.
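To make the notion of compositional sparsity concrete, the sketch below (a hedged illustration, not code from any of the papers cited here) defines a target function of four variables that factors through two 2-variable constituents, then fits it with a small network whose sparse connectivity mirrors that composition graph. PyTorch is assumed, and all module names, layer widths, and training settings are illustrative choices rather than anything prescribed by the cited works.

```python
# Minimal sketch of compositional sparsity (assumptions: PyTorch; the target,
# widths, and training loop below are illustrative, not from the cited papers).
import torch
import torch.nn as nn

def compositionally_sparse_target(x):
    # f(x) = h(g1(x1, x2), g2(x3, x4)): each constituent sees only 2 variables.
    g1 = torch.sin(x[:, 0] * x[:, 1])
    g2 = torch.tanh(x[:, 2] - x[:, 3])
    return (g1 * g2).unsqueeze(1)

class ConstituentMLP(nn.Module):
    """Small MLP approximating one low-dimensional constituent function."""
    def __init__(self, in_dim, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.net(x)

class CompositionalNet(nn.Module):
    """Network whose sparse connectivity mirrors the target's composition graph."""
    def __init__(self):
        super().__init__()
        self.g1 = ConstituentMLP(2)   # sees only (x1, x2)
        self.g2 = ConstituentMLP(2)   # sees only (x3, x4)
        self.h = ConstituentMLP(2)    # combines the two constituent outputs

    def forward(self, x):
        z1 = self.g1(x[:, 0:2])
        z2 = self.g2(x[:, 2:4])
        return self.h(torch.cat([z1, z2], dim=1))

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.rand(1024, 4) * 2 - 1
    y = compositionally_sparse_target(x)
    model = CompositionalNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(f"final training MSE: {loss.item():.4f}")
```

The point of the sketch is architectural: because the target factors through low-dimensional constituents, a network that respects this factorization only ever has to learn functions of two variables at a time, which is the intuition behind treating compositional sparsity as a principle that deep networks can exploit.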