Scientific Machine Learning and Generative Models

The field of scientific machine learning is moving toward more interpretable and physically consistent models. Researchers are exploring linear neural networks and Bayesian risk minimization to better characterize the relationships between physical processes and observed signals. Generative models, including flow-matching and score-based models, are being extended to incorporate higher-order dynamics and physical constraints, enabling more accurate and efficient sampling from data manifolds. Notable papers in this area include:

Optimal Linear Baseline Models for Scientific Machine Learning, which develops a unified theoretical framework for analyzing linear encoder-decoder architectures (a closed-form sketch of such a baseline appears below).

Towards High-Order Mean Flow Generative Models, which introduces an extension of the MeanFlow framework that incorporates average acceleration fields.

When and how can inexact generative models still sample from the data manifold?, which investigates when the support of the data manifold remains robust under inexactly learned dynamical generative models.

Physics-Constrained Fine-Tuning of Flow-Matching Models for Generation and Inverse Problems, which presents a framework for fine-tuning flow-matching generative models to enforce physical constraints and solve inverse problems (see the training-step sketch below).
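To make the first item concrete, here is a minimal sketch of a linear encoder-decoder baseline. It is not the construction from the paper; it simply uses the classical Eckart-Young result that, under squared reconstruction error, the optimal rank-k linear autoencoder projects centered data onto its top-k principal components. The function name and toy data are illustrative.

```python
import numpy as np

def linear_autoencoder_baseline(X, k):
    """Closed-form rank-k linear encoder-decoder for reconstruction.

    By the Eckart-Young theorem, the squared-error-optimal rank-k linear
    reconstruction of centered data is the projection onto the top-k
    principal components, so no iterative training is needed.
    """
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    encoder = Vt[:k].T   # (n_features, k): data -> latent codes
    decoder = Vt[:k]     # (k, n_features): latent codes -> data
    return mu, encoder, decoder

# Usage: reconstruction error of the baseline on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
mu, E, D = linear_autoencoder_baseline(X, k=8)
X_hat = (X - mu) @ E @ D + mu
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```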
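For physics-constrained fine-tuning of flow-matching models, the sketch below shows one common recipe, not necessarily the paper's method: augment a standard conditional flow-matching loss with a penalty on a physics residual evaluated at samples pushed to t = 1 by a single Euler step. The network architecture, the toy constraint (points on the unit sphere), and the weight lam are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Small MLP v_theta(x, t) serving as the flow-matching velocity field."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t[:, None]], dim=-1))

def constraint_residual(x):
    # Assumed toy constraint: generated points should lie on the unit sphere.
    # A real application would substitute a PDE or conservation residual.
    return (x.norm(dim=-1) - 1.0) ** 2

def finetune_step(model, optimizer, x1, lam=0.1):
    """One fine-tuning step: conditional flow-matching loss plus a
    physics penalty on samples pushed to t = 1 by an Euler step."""
    x0 = torch.randn_like(x1)                      # noise endpoints
    t = torch.rand(x1.shape[0])                    # random times in [0, 1]
    xt = (1 - t[:, None]) * x0 + t[:, None] * x1   # linear interpolant
    v_pred = model(xt, t)
    fm_loss = ((v_pred - (x1 - x0)) ** 2).mean()   # match constant velocity
    x_end = xt + (1 - t[:, None]) * v_pred         # Euler step to t = 1
    phys_loss = constraint_residual(x_end).mean()
    loss = fm_loss + lam * phys_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage on toy data lying on the unit circle.
dim = 2
model = VelocityField(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1 = torch.randn(256, dim)
x1 = x1 / x1.norm(dim=-1, keepdim=True)
print(finetune_step(model, opt, x1))
```

The penalty is differentiated through the one-step sample, so the velocity field is nudged toward generating constraint-satisfying points while the flow-matching term keeps it anchored to the data.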

Sources

Optimal Linear Baseline Models for Scientific Machine Learning

Towards High-Order Mean Flow Generative Models: Feasibility, Expressivity, and Provably Efficient Criteria

When and how can inexact generative models still sample from the data manifold?

Physics-Constrained Fine-Tuning of Flow-Matching Models for Generation and Inverse Problems
