The field of scientific machine learning is shifting towards a greater emphasis on interpretability, with researchers seeking to uncover the fundamental principles governing complex systems. This shift is driven by the need to integrate machine learning findings into the broader scientific knowledge base rather than to rely on predictive models alone. Recent work has focused on developing operational definitions of interpretability, emphasizing the understanding of mechanisms over mere mathematical sparsity. In parallel, advances in geometric and probabilistic frameworks are providing new tools for analyzing and interpreting neural networks, including deterministic bounds and random estimates of metric tensors and novel methods for understanding task representations; brief code sketches of the conditional transport and metric-tensor ideas follow the paper list below. Noteworthy papers include:
- The proposal of an operational definition of interpretability for the physical sciences, highlighting the distinction between sparsity and interpretability.
- The introduction of a neural framework for learning conditional optimal transport maps, enabling broader applications of optimal transport principles to complex domains.
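The conditional optimal transport paper noted above learns neural transport maps; as a much simpler point of reference, the sketch below illustrates the underlying idea with a discrete baseline rather than the paper's method. All function names, the per-condition restriction, and the toy data are assumptions for illustration: an entropic OT plan is fit per condition value with a basic Sinkhorn loop, and its barycentric projection serves as a crude conditional map.

```python
import jax
import jax.numpy as jnp

def sinkhorn_plan(x, y, reg=0.05, n_iter=200):
    """Entropic OT plan between empirical measures on x and y (uniform weights)."""
    cost = jnp.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared Euclidean
    cost = cost / cost.max()                     # normalize so the kernel stays well-conditioned
    K = jnp.exp(-cost / reg)
    a = jnp.full(x.shape[0], 1.0 / x.shape[0])
    b = jnp.full(y.shape[0], 1.0 / y.shape[0])
    u = jnp.ones_like(a)
    for _ in range(n_iter):                      # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]           # coupling matrix, shape (n, m)

def conditional_barycentric_map(x_src, y_tgt, c_src, c_tgt, condition):
    """Naive conditional transport: restrict both samples to one condition value
    and map each source point to the barycentre of its coupled targets."""
    xs = x_src[c_src == condition]
    ys = y_tgt[c_tgt == condition]
    P = sinkhorn_plan(xs, ys)
    return (P @ ys) / P.sum(axis=1, keepdims=True)   # transported source points

# Example: two conditions, 2-D point clouds whose target shift depends on the condition.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key, 2)
c_src = jnp.array([0] * 50 + [1] * 50)
c_tgt = jnp.array([0] * 50 + [1] * 50)
x_src = jax.random.normal(k1, (100, 2))
y_tgt = jax.random.normal(k2, (100, 2)) + jnp.where(c_tgt[:, None] == 0, 2.0, -2.0)
mapped = conditional_barycentric_map(x_src, y_tgt, c_src, c_tgt, condition=0)
print(mapped.shape)   # (50, 2): condition-0 sources pushed toward the matching targets
```

A neural framework replaces this per-condition discrete plan with a single map that takes the condition as an input, which is what makes amortization over continuous or high-dimensional conditions possible.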
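To make the metric-tensor tooling mentioned in the opening paragraph concrete, the following sketch contrasts the exact pullback metric G(z) = J(z)^T J(z) of a decoder, computed from the full Jacobian, with a simple Monte Carlo estimate assembled from random vector-Jacobian products. The toy decoder and all names are illustrative assumptions, and the deterministic bounds developed in the recent work are not reproduced here.

```python
import jax
import jax.numpy as jnp

# Toy decoder f: R^d -> R^D (stand-in for a trained generative network).
def decoder(params, z):
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ z + b1)
    return W2 @ h + b2

def exact_pullback_metric(params, z):
    """Exact pullback metric G(z) = J(z)^T J(z) via the full Jacobian."""
    J = jax.jacfwd(lambda z_: decoder(params, z_))(z)   # shape (D, d)
    return J.T @ J                                      # shape (d, d)

def random_pullback_metric(params, z, key, num_samples=64):
    """Monte Carlo estimate of G(z): for eps ~ N(0, I_D), the vector J^T eps has
    covariance J^T J, so averaging outer products of vector-Jacobian products
    recovers the metric in expectation."""
    out, vjp_fn = jax.vjp(lambda z_: decoder(params, z_), z)
    eps = jax.random.normal(key, (num_samples, out.shape[0]))
    jt_eps = jax.vmap(lambda e: vjp_fn(e)[0])(eps)      # shape (num_samples, d)
    return jt_eps.T @ jt_eps / num_samples

# Example usage with random weights (d = 2 latent dims, D = 5 outputs).
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (16, 2)), jnp.zeros(16),
          jax.random.normal(k2, (5, 16)), jnp.zeros(5))
z = jnp.array([0.3, -1.2])
G_exact = exact_pullback_metric(params, z)
G_mc = random_pullback_metric(params, z, k3)
print(jnp.linalg.norm(G_exact - G_mc))   # gap shrinks as num_samples grows
```

The random estimate only needs vector-Jacobian products, so it scales to output dimensions where forming the full Jacobian would be impractical; the exact version is useful as a ground-truth check at small scale.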