Advances in Uncertainty Estimation and Bayesian Learning

The field of machine learning is placing growing emphasis on uncertainty estimation and Bayesian learning. Researchers are developing new methods to quantify and manage uncertainty in complex models, particularly in deep learning, including Bayesian neural networks, uncertainty-aware optimization algorithms, and techniques for calibrating uncertainty estimates (a standard calibration metric is sketched at the end of this section). Notably, Bayesian learned interatomic potentials (BLIPs) and Twin-Boot, an uncertainty-aware optimization method, have shown promising results in simulation-based chemistry and in deep neural networks, respectively. There is also growing recognition that probabilistic principles can unify estimation theory, machine learning, and generative AI. Some noteworthy papers include:

  • BLIPs: Bayesian Learned Interatomic Potentials, which proposes a scalable variational Bayesian framework for training interatomic potentials (the core variational idea is sketched below), and
  • Twin-Boot: Uncertainty-Aware Optimization via Online Two-Sample Bootstrapping, which introduces a resampling-based training procedure for uncertainty estimation and regularization (a simplified sketch follows the BLIPs example).
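
To make the variational Bayesian idea concrete, here is a minimal sketch of the kind of mean-field variational layer such frameworks build on (an illustrative PyTorch example, not the BLIPs implementation; all names are hypothetical). Each weight gets a Gaussian posterior whose mean and variance are learned by maximizing the evidence lower bound via the reparameterization trick:

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Mean-field variational linear layer: each weight has a
    learned Gaussian posterior q(w) = N(mu, sigma^2)."""
    def __init__(self, in_dim, out_dim, prior_std=1.0):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.log_sigma = nn.Parameter(torch.full((out_dim, in_dim), -3.0))
        self.prior_std = prior_std

    def forward(self, x):
        sigma = self.log_sigma.exp()
        # Reparameterization trick: sample weights as mu + sigma * eps
        eps = torch.randn_like(sigma)
        w = self.mu + sigma * eps
        return x @ w.t()

    def kl(self):
        # Closed-form KL(q || p) against a N(0, prior_std^2) prior
        sigma = self.log_sigma.exp()
        return (torch.log(self.prior_std / sigma)
                + (sigma**2 + self.mu**2) / (2 * self.prior_std**2)
                - 0.5).sum()
```

In training, the data-fit loss is augmented with the summed KL terms scaled by one over the dataset size, and predictive uncertainty comes from averaging several forward passes, each drawing fresh weights.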
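The Twin-Boot title points to online two-sample bootstrapping; a simplified reading of that idea (hypothetical details, not the paper's exact algorithm) is to maintain two model copies, feed each an independent bootstrap resample of every mini-batch, and use their disagreement both as an uncertainty estimate and as a regularizer:

```python
import torch

def twin_boot_step(model_a, model_b, opt_a, opt_b, x, y, loss_fn, lam=0.1):
    """One step of a simplified two-sample bootstrap scheme
    (illustrative only; details differ from the Twin-Boot paper)."""
    n = x.size(0)
    idx_a = torch.randint(0, n, (n,))   # bootstrap resample for twin A
    idx_b = torch.randint(0, n, (n,))   # independent resample for twin B

    pred_a = model_a(x[idx_a])
    pred_b = model_b(x[idx_b])
    # Disagreement on the shared batch approximates sampling variance
    gap = (model_a(x) - model_b(x)).pow(2).mean()

    loss = loss_fn(pred_a, y[idx_a]) + loss_fn(pred_b, y[idx_b]) + lam * gap
    opt_a.zero_grad(); opt_b.zero_grad()
    loss.backward()
    opt_a.step(); opt_b.step()
    return gap.item()   # per-batch uncertainty signal

# usage (hypothetical): two identical nets with separate optimizers,
# e.g. model_b = copy.deepcopy(model_a), then call twin_boot_step per batch
```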
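Finally, calibration of uncertainty estimates, the subject of the binary-classification evaluation paper listed below, is commonly measured with the expected calibration error (ECE). Here is a standard binned implementation for binary classifiers (a textbook sketch, not taken from that paper):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: weighted average gap between predicted
    confidence and observed accuracy within each bin."""
    confs = np.maximum(probs, 1.0 - probs)   # confidence in predicted class
    preds = (probs >= 0.5).astype(int)
    correct = (preds == labels).astype(float)
    edges = np.linspace(0.5, 1.0, n_bins + 1)   # confidence lives in [0.5, 1]
    ids = np.clip(np.digitize(confs, edges) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = ids == b
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confs[mask].mean())
    return float(ece)
```

A well-calibrated model has ECE near zero: among predictions made with, say, 80% confidence, about 80% should be correct.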

Sources

Learning with Confidence

Towards the Next-generation Bayesian Network Classifiers

Calibrated and uncertain? Evaluating uncertainty estimates in binary classification models

BLIPs: Bayesian Learned Interatomic Potentials

Twin-Boot: Uncertainty-Aware Optimization via Online Two-Sample Bootstrapping

Hybrid Least Squares/Gradient Descent Methods for DeepONets

Tutorial on the Probabilistic Unification of Estimation Theory, Machine Learning, and Generative AI
