Advances in Learning under Distribution Shift and Uncertainty

The field of machine learning is making notable progress on learning under distribution shift and uncertainty. Real-world data often differs in distribution between training and deployment, and researchers are developing methods that remain reliable under such shifts. One direction investigates the interconnections between calibration, quantification, and classifier accuracy prediction under dataset shift. Another develops online decision-focused learning algorithms that adapt as objective functions and data distributions change over time. There is also growing interest in designing surrogate losses for non-polyhedral, differentiable target functions, and in active learning frameworks for multi-group mean estimation. Noteworthy papers in this area include:

  • On the Interconnections of Calibration, Quantification, and Classifier Accuracy Prediction under Dataset Shift, which proves that the three tasks are equivalent by reducing each to the others and proposes new methods for each problem (a minimal quantification sketch follows this list).
  • Online Decision-Focused Learning, which studies decision-focused learning in dynamic environments and proposes a practical online algorithm with bounds on the expected dynamic regret (see the toy sketch after this list).
  • Consistency Conditions for Differentiable Surrogate Losses, which gives the first results on the equivalence of indirect elicitation and calibration for non-polyhedral surrogates, and constructs a counterexample showing that this equivalence fails in higher dimensions.
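
To make these interconnections concrete, the sketch below uses the classical Adjusted Classify-and-Count quantifier, a standard baseline rather than the paper's new method: under pure label shift, the positive-class prevalence recovered from hard classifier decisions also yields a prediction of test accuracy. The function names and the synthetic scores are illustrative assumptions.

```python
import numpy as np

def quantify_acc(train_scores, train_labels, test_scores, threshold=0.5):
    """Adjusted Classify-and-Count: estimate the positive-class prevalence on a
    shifted test set from hard classifier decisions (assumes label shift only)."""
    train_preds = train_scores >= threshold
    tpr = train_preds[train_labels == 1].mean()  # source true-positive rate
    fpr = train_preds[train_labels == 0].mean()  # source false-positive rate
    raw = (test_scores >= threshold).mean()      # naive classify-and-count
    # Invert E[raw] = p * tpr + (1 - p) * fpr for the prevalence p.
    p = (raw - fpr) / max(tpr - fpr, 1e-12)
    return float(np.clip(p, 0.0, 1.0)), tpr, fpr

def predict_accuracy(p, tpr, fpr):
    """Predicted test accuracy at the estimated prevalence p, assuming the
    class-conditional score distributions are unchanged."""
    return p * tpr + (1.0 - p) * (1.0 - fpr)

# Illustrative usage: synthetic scores with a shifted class prior at test time.
rng = np.random.default_rng(0)
train_labels = rng.binomial(1, 0.5, 5000)
train_scores = rng.normal(train_labels, 1.0)  # positives score higher on average
test_labels = rng.binomial(1, 0.2, 5000)      # shifted prior: 20% positives
test_scores = rng.normal(test_labels, 1.0)
p, tpr, fpr = quantify_acc(train_scores, train_labels, test_scores)
print(f"estimated prevalence={p:.3f}, "
      f"predicted accuracy={predict_accuracy(p, tpr, fpr):.3f}")
```

Under these assumptions the predicted accuracy tracks the true test accuracy, illustrating one direction of the reductions between quantification and accuracy prediction that the paper formalizes.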

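In the same spirit, the online decision-focused setting can be illustrated with a toy newsvendor problem; the problem choice, cost parameters, and step size are illustrative assumptions, not the paper's algorithm. At each round the learner commits to an order quantity, observes the realized demand, pays the resulting decision cost, and takes an online subgradient step directly on that cost, so the learning signal is the downstream decision loss rather than a prediction loss.

```python
import numpy as np

def online_newsvendor(demands, c_under=2.0, c_over=1.0, lr=0.05):
    """Toy online decision-focused learning: choose an order quantity q, observe
    demand d, pay c_under per unit of shortage and c_over per unit of excess,
    then update q by online subgradient descent on the incurred decision cost."""
    q, costs = 1.0, []
    for d in demands:
        cost = c_under * max(d - q, 0.0) + c_over * max(q - d, 0.0)
        costs.append(cost)
        grad = -c_under if d > q else c_over  # subgradient of the cost in q
        q = max(q - lr * grad, 0.0)           # keep the order quantity feasible
    return q, float(np.mean(costs))

# Illustrative usage: the demand distribution drifts mid-stream, so the best
# decision changes over time, which is what dynamic regret measures against.
rng = np.random.default_rng(1)
demands = np.concatenate([rng.normal(2.0, 0.3, 500), rng.normal(4.0, 0.3, 500)])
q_final, avg_cost = online_newsvendor(demands)
print(f"final order quantity={q_final:.2f}, average decision cost={avg_cost:.3f}")
```

Comparing the cumulative cost of such an update against the best time-varying sequence of decisions gives the dynamic regret that the paper bounds.
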
Sources

On the Interconnections of Calibration, Quantification, and Classifier Accuracy Prediction under Dataset Shift

Online Decision-Focused Learning

Consistency Conditions for Differentiable Surrogate Losses

An active learning framework for multi-group mean estimation

When to retrain a machine learning model

Know When to Abstain: Optimal Selective Classification with Likelihood Ratios

Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing

Group Distributionally Robust Optimization with Flexible Sample Queries

Adaptive Estimation and Learning under Temporal Distribution Shift

Persuasive Prediction via Decision Calibration

Multivariate Latent Recalibration for Conditional Normalizing Flows

Contextual Learning for Stochastic Optimization
