The field of machine learning is moving toward more interpretable and explainable models, with recent work focused on frameworks that expose the decision-making processes of complex models. One key direction is the use of kernel methods, which offer a theoretically grounded framework for non-linear, non-parametric learning. Another is the analysis and interpretation of the features learned by trained neural networks. Noteworthy papers include the introduction of the loss kernel, a geometric probe for deep learning interpretability, and the use of the empirical neural tangent kernel to surface the features a trained network relies on. There have also been significant advances in imputing missing data, including approaches based on tensor trains and implicit neural representations.
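
To make the empirical neural tangent kernel concrete, the sketch below computes the eNTK Gram matrix of a small MLP in JAX: each entry is the inner product of per-example output gradients taken with respect to all parameters. This is a minimal illustration under assumed names (`init_mlp`, `mlp`, `empirical_ntk` are hypothetical helpers), not the procedure of any particular cited paper.

```python
# Hedged sketch: empirical NTK of a small scalar-output MLP.
# eNTK[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # Simple Gaussian initialization for each (weight, bias) layer.
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) / jnp.sqrt(din)
        params.append((w, jnp.zeros(dout)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)  # one scalar output per example

def empirical_ntk(params, xs):
    # Jacobian of the outputs w.r.t. all parameters; one row per input example.
    jac = jax.jacobian(lambda p: mlp(p, xs))(params)
    flat = jnp.concatenate(
        [j.reshape(xs.shape[0], -1) for j in jax.tree_util.tree_leaves(jac)],
        axis=1,
    )
    # Gram matrix of per-example parameter gradients.
    return flat @ flat.T

key = jax.random.PRNGKey(0)
params = init_mlp(key, [4, 16, 16, 1])
xs = jax.random.normal(key, (5, 4))
print(empirical_ntk(params, xs).shape)  # (5, 5) kernel matrix
```

In interpretability work of this kind, the resulting kernel matrix (or its eigendecomposition) is typically examined to see which input directions or examples the trained network treats as similar; the sketch above only shows the kernel computation itself.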