Uncertainty Quantification in Deep Learning

The field of deep learning is placing growing emphasis on uncertainty quantification: methods that provide reliable, well-calibrated estimates of uncertainty in neural network predictions. This trend is driven by the need for trustworthy and transparent models, particularly in high-stakes applications such as healthcare and autonomous systems. Recent research has explored several families of approaches, including Bayesian neural networks, deep ensembles, and Monte Carlo dropout, all of which have shown promise in improving the calibration and reliability of neural network predictions.

Notable papers in this area include NeuralSurv, which introduces a Bayesian uncertainty quantification framework for deep survival analysis, and SurvUnc, which proposes a meta-model based framework for post-hoc uncertainty quantification in survival analysis. Enhancing Monte Carlo Dropout Performance for Uncertainty Quantification contributes frameworks that make Monte Carlo dropout estimates more reliable, while Last Layer Empirical Bayes instantiates a learnable prior as a normalizing flow and reports promising results in uncertainty quantification.
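To make the Monte Carlo dropout idea concrete, below is a minimal sketch of how predictive uncertainty is typically estimated with it: dropout is kept active at inference time, several stochastic forward passes are run, and the spread of the resulting predictions is used as an uncertainty estimate. The network architecture, hyperparameters, and function names here are illustrative assumptions, not the method of any of the cited papers.

```python
import torch
import torch.nn as nn

# Illustrative sketch of Monte Carlo dropout for uncertainty quantification.
# Architecture and hyperparameters are assumptions chosen for demonstration.

class MCDropoutNet(nn.Module):
    def __init__(self, in_dim=16, hidden=64, out_dim=1, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # kept stochastic at test time for MC sampling
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout enabled;
    return the per-input predictive mean and standard deviation."""
    model.train()  # keeps dropout layers active; safe here because
                   # no gradients are computed under torch.no_grad()
    preds = torch.stack([model(x) for _ in range(n_samples)])  # (S, B, out)
    return preds.mean(dim=0), preds.std(dim=0)

model = MCDropoutNet()
x = torch.randn(8, 16)            # batch of 8 dummy inputs
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)      # torch.Size([8, 1]) for both
```

Deep ensembles follow the same predict-then-aggregate pattern, except the predictions come from several independently trained models rather than from repeated stochastic passes through a single network.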

Sources

Approximation and Generalization Abilities of Score-based Neural Network Generative Models for Sub-Gaussian Distributions

NeuralSurv: Deep Survival Analysis with Bayesian Uncertainty Quantification

Uncertainty quantification with approximate variational learning for wearable photoplethysmography prediction tasks

SurvUnc: A Meta-Model Based Uncertainty Quantification Framework for Survival Analysis

Bayesian Ensembling: Insights from Online Optimization and Empirical Bayes

Enhancing Monte Carlo Dropout Performance for Uncertainty Quantification

Last Layer Empirical Bayes
