The field of machine learning is moving towards more reliable and trustworthy models, with a focus on uncertainty quantification and robust inference. Recent work has highlighted the importance of providing informative, adaptive prediction intervals and of ensuring coverage guarantees under distribution shift. Notably, combining conformal prediction with mixture-of-experts architectures has shown promise in delivering tight, trustworthy uncertainty estimates. There is also growing interest in methods that handle out-of-distribution data and provide reliable estimates of treatment effects over time. A sketch of the underlying conformal prediction recipe follows below.
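To make the coverage-guarantee idea concrete, the following is a minimal sketch of standard split conformal prediction, the generic recipe these works build on: hold out a calibration set, compute nonconformity scores, and widen point predictions by the corresponding empirical quantile. The model, data, and variable names are illustrative assumptions, not the method of any paper cited here.

```python
# Minimal sketch of split conformal prediction (illustrative data and model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + 0.5 * np.sin(3 * X[:, 1]) + 0.3 * rng.normal(size=2000)

# Split into a proper training set and a held-out calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile for target miscoverage alpha (e.g. 10%).
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction interval for a new point: [f(x) - q_hat, f(x) + q_hat],
# with marginal coverage of at least 1 - alpha under exchangeability.
x_new = rng.normal(size=(1, 5))
pred = model.predict(x_new)[0]
print(f"interval: [{pred - q_hat:.2f}, {pred + q_hat:.2f}]")
```

The guarantee is marginal and assumes exchangeability between calibration and test data, which is precisely the assumption that distribution shift breaks and that the adaptive methods summarized here aim to address.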
Noteworthy papers in this area include Adaptive Individual Uncertainty under Out-Of-Distribution Shift with Expert-Routed Conformal Prediction, which introduces an uncertainty quantification method that provides per-sample uncertainty estimates with reliable coverage guarantees, and CONFEX: Uncertainty-Aware Counterfactual Explanations with Conformal Guarantees, which proposes a method for generating uncertainty-aware counterfactual explanations using Conformal Prediction and Mixed-Integer Linear Programming.
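To convey the flavor of routing calibration to different experts or regions of the input space, here is a minimal sketch of group-conditional conformal calibration, where each group gets its own quantile so interval widths adapt to local reliability. The clustering "router", model, and data are stand-in assumptions for illustration only; this is not the algorithm of either paper above.

```python
# Illustrative sketch: group-conditional conformal calibration with a
# clustering model standing in for an expert router (not the papers' method).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 4))
y = X @ np.array([1.0, -0.5, 0.2, 0.0]) + (1 + np.abs(X[:, 0])) * rng.normal(size=3000)

X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:2000], y[1000:2000]
X_test = X[2000:]

model = LinearRegression().fit(X_tr, y_tr)
router = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X_tr)  # stand-in "router"

alpha = 0.1
scores = np.abs(y_cal - model.predict(X_cal))
cal_groups = router.predict(X_cal)

# One conformal quantile per group, so intervals widen where residuals are larger.
q_hat = {}
for g in np.unique(cal_groups):
    s = scores[cal_groups == g]
    level = min(np.ceil((len(s) + 1) * (1 - alpha)) / len(s), 1.0)
    q_hat[g] = np.quantile(s, level, method="higher")

test_groups = router.predict(X_test)
widths = np.array([q_hat[g] for g in test_groups])
print("mean interval half-width per group:",
      {int(g): float(widths[test_groups == g].mean()) for g in np.unique(test_groups)})
```

Calibrating per group trades some statistical efficiency (fewer calibration points per quantile) for uncertainty estimates that adapt to heterogeneity across the input space.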