Uncertainty Quantification in Deep Learning

Deep learning research is placing growing emphasis on uncertainty quantification: methods that attach calibrated, trustworthy confidence estimates to model outputs. The push comes from applications where acting on an overconfident prediction carries real consequences, and it has renewed interest in Bayesian neural networks, Gaussian processes, and Laplace approximations as routes to more accurate and reliable uncertainty estimates. Notable recent work includes a Semantic-Aware Gaussian Process calibration framework with structured layerwise kernels, which improves interpretability and effectiveness in assessing predictive reliability; a confidence-optimization method for probabilistic encoding, which makes embedding distances more reliable and strengthens representation learning; and Distributional Uncertainty for Out-of-Distribution Detection, which jointly models distributional uncertainty and flags out-of-distribution (OoD) and misclassified regions using a free-energy score.
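
To make the free-energy idea concrete, the sketch below scores classifier logits with the standard energy formulation E(x) = -T · logsumexp(f(x)/T). It is a minimal illustration of that general score, not the paper's specific method; the temperature and threshold values are assumptions chosen for the example.

```python
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def free_energy(logits, temperature=1.0):
    """Energy score E(x) = -T * logsumexp(f(x) / T) over class logits.

    Very negative energy means the model assigns high total unnormalized
    likelihood to x; unusually high energy suggests an out-of-distribution
    or likely-misclassified input.
    """
    return -temperature * logsumexp(logits / temperature, axis=-1)

# Example: a confident in-distribution row and a diffuse, suspect row.
logits = jnp.array([[8.0, 0.5, 0.3],
                    [0.4, 0.5, 0.6]])
energy = free_energy(logits)   # approx. [-8.0, -1.6]
is_ood = energy > -2.0         # threshold must be tuned on held-out data
```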

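Of the approaches listed above, the Laplace approximation is the easiest to sketch end to end: take a trained model's MAP parameters and fit a Gaussian whose covariance is the inverse curvature of the training objective at that mode. The toy sketch below uses a full Hessian, which only scales to small models or last-layer treatments; it is a minimal illustration under those assumptions, not the laplax API, and `neg_log_joint` and the toy numbers are invented for the example.

```python
import jax
import jax.numpy as jnp

def laplace_fit(neg_log_joint, theta_map):
    """Gaussian posterior N(theta_map, Sigma) around a MAP estimate.

    Sigma is the inverse Hessian of the negative log joint (loss plus
    negative log-prior) at the mode. A dense Hessian like this only
    scales to small models or to last-layer / subnetwork treatments.
    """
    hessian = jax.hessian(neg_log_joint)(theta_map)  # curvature at the mode
    return jnp.linalg.inv(hessian)                   # posterior covariance

# Toy joint: quadratic data fit plus a unit-precision Gaussian prior.
def neg_log_joint(theta):
    return 0.5 * jnp.sum((theta - 1.0) ** 2) + 0.5 * jnp.sum(theta ** 2)

theta_map = jnp.array([0.5, 0.5])              # analytic mode of the toy joint
sigma = laplace_fit(neg_log_joint, theta_map)  # ~0.5 * identity
```

Predictive uncertainty then comes from sampling parameters from this Gaussian (or from a linearized predictive), which is the general setting of the laplax source listed below.
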
Sources

Single- to multi-fidelity history-dependent learning with uncertainty quantification and disentanglement: application to data-driven constitutive modeling

Feature Bank Enhancement for Distance-based Out-of-Distribution Detection

An Uncertainty-aware DETR Enhancement Framework for Object Detection

Semantic-Aware Gaussian Process Calibration with Structured Layerwise Kernels for Deep Neural Networks

Confidence Optimization for Probabilistic Encoding

laplax -- Laplace Approximations with JAX

Distributional Uncertainty for Out-of-Distribution Detection
