Advances in Uncertainty Quantification and Robustness in Machine Learning

The field of machine learning is seeing significant progress in uncertainty quantification and robustness, aimed at making models more reliable and trustworthy. Recent research highlights the importance of accounting for input measurement uncertainty, distribution shift, and out-of-distribution data across applications such as medical imaging, land cover classification, and volcanic activity forecasting. Among the noteworthy papers, one proposes an uncertainty quantification method for variational autoencoders that combines Laplace approximations with stochastic trace estimators so that the computation scales gracefully with image dimensionality. Another presents an uncertainty-aware Bayesian framework for land cover classification that uses generative modeling to account for input measurement uncertainty. A third employs Bayesian Regularized Neural Networks to forecast volcanic radiative power at Fuego Volcano, reporting superior performance over competing models. Together, these advances promise more accurate, more reliable models and more effective decision-making under uncertainty.
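
The trace-estimation idea mentioned above is the easiest to make concrete. The sketch below shows Hutchinson's stochastic trace estimator, the standard trick for estimating tr(H) using only Hessian-vector products, which is what allows a Laplace approximation to avoid materializing the full Hessian. It is a minimal illustration, not code from the paper: the function names and the toy loss are assumptions made for the example.

```python
import torch
from torch.autograd.functional import hvp

def hutchinson_trace(hvp_fn, dim, n_probes=64, device="cpu"):
    """Estimate tr(H) with Hutchinson's estimator.

    hvp_fn: callable mapping a vector v to H @ v (a Hessian-vector
    product), so H itself is never built. Memory stays O(dim) instead
    of O(dim^2), which is what lets Laplace-style approximations scale
    with image dimensionality.
    """
    estimates = []
    for _ in range(n_probes):
        # Rademacher probe: entries are +/-1, so E[v v^T] = I and
        # E[v^T H v] = tr(H).
        v = torch.randint(0, 2, (dim,), device=device).float() * 2 - 1
        estimates.append(torch.dot(v, hvp_fn(v)))
    return torch.stack(estimates).mean()

# Toy usage: trace of the Hessian of a stand-in loss (not a real VAE).
params = torch.randn(10)
loss_fn = lambda p: (p ** 4).sum()
trace_est = hutchinson_trace(
    lambda v: hvp(loss_fn, params, v)[1], dim=10, n_probes=256
)
# Exact value for this toy loss is 12 * sum(params**2), useful to
# sanity-check the estimate.
```

Rademacher probes are a common choice here because the estimator is unbiased and typically has lower variance than Gaussian probes; accuracy is then traded off against cost through the number of probes.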

Sources

Bayesian generative models can flag performance loss, bias, and out-of-distribution image content

Decision from Suboptimal Classifiers: Excess Risk Pre- and Post-Calibration

Uncertainty-aware Bayesian machine learning modelling of land cover classification

Forecasting Volcanic Radiative Power (VPR) at Fuego Volcano Using Bayesian Regularized Neural Network

Robustness quantification and how it allows for reliable classification, even in the presence of distribution shift and for small training sets
