Uncertainty Quantification and Diffusion Models in Machine Learning

The field of machine learning is placing growing emphasis on uncertainty quantification and on building more robust models. This trend is evident in the rising adoption of diffusion models, which have shown great promise both for generating complex, high-dimensional data and for modeling uncertainty. Recent work has focused on improving the accuracy and efficiency of these models, as well as their ability to handle out-of-distribution (OOD) data. Notable papers in this area include Bayesian E(3)-Equivariant Interatomic Potential with Iterative Restratification of Many-body Message Passing, which introduces a new approach to uncertainty quantification in machine-learning interatomic potentials, and EigenScore: OOD Detection using Covariance in Diffusion Models, which proposes an OOD detection method based on the covariance structure of diffusion models. Papers such as GDiffuSE: Diffusion-based speech enhancement with noise model guidance and FoilDiff: A Hybrid Transformer Backbone for Diffusion-based Modelling of 2D Airfoil Flow Fields demonstrate the application of diffusion models to real-world problems, including speech enhancement and aerodynamic design.
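To make the uncertainty-quantification angle concrete, here is a minimal toy sketch (not the method of any paper listed below): ancestral DDPM sampling in one dimension, using the closed-form optimal denoiser for Gaussian data x0 ~ N(0, 1). Drawing many reverse-process samples yields a predictive distribution whose mean and spread serve as a simple uncertainty estimate. The schedule, step count, and function names here are illustrative choices, not taken from the cited work.

```python
import numpy as np

# Toy DDPM-style reverse process for 1-D data x0 ~ N(0, 1).
# All hyperparameters (T, beta schedule) are illustrative assumptions.
rng = np.random.default_rng(0)

T = 50
betas = np.linspace(1e-3, 0.2, T)   # noise schedule
alphas = 1.0 - betas
abars = np.cumprod(alphas)          # cumulative products \bar{alpha}_t

def optimal_eps(x_t, t):
    """Closed-form E[eps | x_t] when x0 ~ N(0, 1): eps_hat = sqrt(1 - abar_t) * x_t."""
    return np.sqrt(1.0 - abars[t]) * x_t

def sample(n):
    """Draw n samples by running the reverse (denoising) process from pure noise."""
    x = rng.standard_normal(n)      # x_T ~ N(0, 1) prior
    for t in range(T - 1, -1, -1):
        eps_hat = optimal_eps(x, t)
        # Standard DDPM posterior mean for x_{t-1} given x_t and eps_hat.
        mean = (x - betas[t] / np.sqrt(1.0 - abars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            # "tilde beta" posterior variance; no noise is added at the final step.
            var = betas[t] * (1.0 - abars[t - 1]) / (1.0 - abars[t])
            x = mean + np.sqrt(var) * rng.standard_normal(n)
        else:
            x = mean
    return x

samples = sample(20000)
mu, sigma = samples.mean(), samples.std()
print(f"predictive mean {mu:+.3f}, predictive std {sigma:.3f}")
```

Because the denoiser is optimal for N(0, 1) data, the recovered predictive distribution should closely match it, with the empirical standard deviation playing the role of an uncertainty estimate; with a learned denoiser on real data, the same repeated-sampling recipe gives a Monte Carlo predictive distribution.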

Sources

Bayesian E(3)-Equivariant Interatomic Potential with Iterative Restratification of Many-body Message Passing

Dynamic Meta-Learning for Adaptive XGBoost-Neural Ensembles

Matching the Optimal Denoiser in Point Cloud Diffusion with (Improved) Rotational Alignment

GDiffuSE: Diffusion-based speech enhancement with noise model guidance

FoilDiff: A Hybrid Transformer Backbone for Diffusion-based Modelling of 2D Airfoil Flow Fields

Improved probabilistic regression using diffusion models

HybridFlow: Quantification of Aleatoric and Epistemic Uncertainty with a Single Hybrid Model

Transforming Noise Distributions with Histogram Matching: Towards a Single Denoiser for All

EigenScore: OOD Detection using Covariance in Diffusion Models
