Deep Learning and Parametric Models: Emerging Trends

The field of deep learning is witnessing a significant shift toward understanding the underlying principles and phenomena that drive its success. Rather than settling for ad hoc explanations of individual phenomena, researchers are developing broad explanatory theories that apply across deep learning models. One such direction is compositional sparsity: the ability of deep neural networks to exploit the compositional structure of target functions, for instance a target of the form f(x) = g(h1(x1, x2), h2(x3, x4)) in which each constituent function depends on only a few variables. Another growing direction is the use of category theory to provide a semantic framework for understanding and structuring AI systems. Parametric models, such as those arising in variational problems, are also being re-examined in light of new frameworks and techniques.

Noteworthy papers include InfinityKAN, which adaptively learns a potentially infinite number of bases for each univariate function during training; the Gauss-Markov Adjunction, which gives a categorical formulation of supervised learning and a semantics for its residuals; the High-Order Deep Meta-Learning framework, which enables neural networks to construct, solve, and generalize across hierarchies of tasks; and a position paper arguing that compositional sparsity is a fundamental principle governing the learning dynamics of deep neural networks.
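
As a concrete illustration of the KAN-style idea of adaptively sized univariate expansions, the sketch below (plain NumPy; the class name, the Gaussian basis family, and the growth rule are illustrative assumptions, not the InfinityKAN method) represents one edge function as a weighted sum of bases whose count can be grown during training.

    import numpy as np

    class AdaptiveUnivariateFunction:
        """KAN-style edge function phi(x) = sum_k w_k * exp(-(x - c_k)^2 / (2 * s^2)).

        The number of bases is not fixed up front: add_basis() grows the expansion,
        loosely mimicking the idea of learning a potentially unbounded basis set.
        """

        def __init__(self, n_bases=4, x_range=(-1.0, 1.0), seed=0):
            rng = np.random.default_rng(seed)
            self.centers = np.linspace(*x_range, n_bases)      # basis centers c_k
            self.weights = 0.1 * rng.standard_normal(n_bases)  # coefficients w_k
            self.width = (x_range[1] - x_range[0]) / n_bases   # shared bandwidth s

        def add_basis(self, center):
            """Grow the expansion with a new basis where more resolution is needed."""
            self.centers = np.append(self.centers, center)
            self.weights = np.append(self.weights, 0.0)        # new basis starts inactive

        def __call__(self, x):
            x = np.asarray(x, dtype=float)[..., None]          # shape (..., 1)
            phi = np.exp(-((x - self.centers) ** 2) / (2.0 * self.width ** 2))
            return phi @ self.weights                          # weighted sum over bases

    # Usage: evaluate on a grid, then refine by adding a basis near x = 0.25.
    f = AdaptiveUnivariateFunction()
    f.add_basis(center=0.25)
    y = f(np.linspace(-1.0, 1.0, 5))

How the number of bases and their coefficients are actually learned in the cited work is beyond this sketch, which only shows the structural idea of a per-edge univariate function whose basis set is not fixed in advance.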

Sources

Not All Explanations for Deep Learning Phenomena Are Equally Valuable

Sectional Kolmogorov N-widths for parameter-dependent function spaces: A general framework with application to parametrized Friedrichs' systems

The Gauss-Markov Adjunction: Categorical Semantics of Residuals in Supervised Learning

Variational Kolmogorov-Arnold Network

Position: A Theory of Deep Learning Must Include Compositional Sparsity

High-Order Deep Meta-Learning with Category-Theoretic Interpretation
