Advances in Uncertainty Quantification and Reliable Inference

Machine learning research is increasingly focused on reliable, trustworthy models, with uncertainty quantification and robust inference at the center. Recent work emphasizes prediction intervals that are both informative and adaptive, and coverage guarantees that continue to hold under distribution shift. Notably, combining conformal prediction with mixture-of-experts architectures has shown promise for delivering tight yet trustworthy uncertainty estimates. There is also growing interest in methods that handle out-of-distribution data and provide reliable estimates of treatment effects over time.
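Conformal prediction is the common thread in several of these works: wrapped around any point predictor and a held-out calibration set, it produces intervals C(X) with the marginal guarantee P(Y ∈ C(X)) ≥ 1 − α, valid whenever calibration and test data are exchangeable. The sketch below shows the basic split-conformal recipe; `model`, `X_cal`, and `y_cal` are illustrative placeholders, not an API from any of the cited papers.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals with marginal coverage
    >= 1 - alpha, assuming calibration and test points are exchangeable.
    `model` is any fitted regressor exposing .predict(X) (placeholder)."""
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    # Finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, level, method="higher")
    # Symmetric interval around each test-point prediction.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

This guarantee is marginal and breaks when exchangeability fails, which is precisely the gap that the expert-routed and out-of-distribution contributions listed below target.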

Some noteworthy papers in this area include: Adaptive Individual Uncertainty under Out-Of-Distribution Shift with Expert-Routed Conformal Prediction, which introduces an uncertainty quantification method that provides per-sample uncertainty with reliable coverage guarantees, and CONFEX: Uncertainty-Aware Counterfactual Explanations with Conformal Guarantees, which generates uncertainty-aware counterfactual explanations using conformal prediction and Mixed-Integer Linear Programming (MILP); a generic illustration of the conformal ingredient follows below.
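CONFEX's MILP formulation is not reproduced here, but the conformal building block such methods rely on can be shown generically: a candidate counterfactual can be screened by its conformal p-value, computed against nonconformity scores from a calibration set. This is a hedged, generic sketch, not the paper's actual procedure; `cand_score` and `cal_scores` are assumed inputs.

```python
import numpy as np

def conformal_p_value(cand_score, cal_scores):
    """Conformal p-value of a candidate point given calibration
    nonconformity scores. The candidate lies inside the level-(1 - alpha)
    conformal region exactly when the returned p-value exceeds alpha."""
    n = len(cal_scores)
    return (np.sum(cal_scores >= cand_score) + 1) / (n + 1)
```

A counterfactual search can then discard any candidate whose p-value falls below the target α, keeping only explanations the calibrated model considers plausible.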

Sources

Adaptive Individual Uncertainty under Out-Of-Distribution Shift with Expert-Routed Conformal Prediction

Adversary-Free Counterfactual Prediction via Information-Regularized Representations

Reliable Inference in Edge-Cloud Model Cascades via Conformal Alignment

Functional Distribution Networks (FDN)

Neural Variational Dropout Processes

Overlap-weighted orthogonal meta-learner for treatment effect estimation over time

CoSense-LLM: Semantics at the Edge with Cost- and Uncertainty-Aware Cloud-Edge Cooperation

CONFEX: Uncertainty-Aware Counterfactual Explanations with Conformal Guarantees
