The field of machine learning is placing increasing emphasis on uncertainty quantification and robustness. Researchers are developing new methods to quantify and communicate uncertainty in predictive models, particularly in high-stakes applications where reliability is crucial. This includes work on reject-option prediction, which allows a model to abstain when its uncertainty is high, and new approaches to constructing unlearnable examples that prevent models from learning sensitive information. Additionally, there is growing interest in analyzing the theoretical behavior of popular algorithms, such as the Expectation-Maximization algorithm, to provide non-asymptotic guarantees and improve their performance. Noteworthy papers include:
- Epistemic Reject Option Prediction, which introduces a principled framework for learning predictors that identify inputs on which the training data is insufficient to support a reliable decision (a minimal sketch of the abstention idea follows this list).
- Towards Provably Unlearnable Examples via Bayes Error Optimisation, which constructs unlearnable examples by maximising the Bayes error, the error floor that no classifier trained on the perturbed data can beat (illustrated in the second sketch below).
- Practical Global and Local Bounds in Gaussian Process Regression via Chaining, which provides a chaining-based framework for estimating upper and lower bounds on the expected extreme values of the process over unseen data (the third sketch below shows the naive grid-based bound that chaining improves on).
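To make the reject-option idea concrete, here is a minimal sketch of an abstaining predictor. It is not the epistemic framework from the paper above: it uses distance to the k nearest training points as a crude proxy for "the training data is insufficient here", and the threshold `tau`, neighbour count `k`, and toy data are all illustrative choices.

```python
# Minimal sketch of a reject-option predictor (illustrative proxy only,
# not the paper's method): abstain when the query lies far from the
# training data.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class training set: two Gaussian blobs in 2-D.
X_train = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

def predict_with_reject(x, tau=1.5, k=5):
    """k-NN prediction that abstains (returns None) when the mean
    distance to the k nearest training points exceeds tau."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    if d[nearest].mean() > tau:          # sparse coverage: data insufficient
        return None                      # reject; defer to a fallback
    return int(np.bincount(y_train[nearest]).argmax())  # majority vote

print(predict_with_reject(np.array([-2.0, -2.0])))  # near blob 0 -> 0
print(predict_with_reject(np.array([10.0, 10.0])))  # far from data -> None
```

A rejected input would typically be routed to a human or a more expensive model; the point is that the abstention rule depends on where the training data is, not on the predicted label.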
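The Bayes error that the unlearnable-examples paper maximises has a closed form in the simplest setting, which shows why it is the right target: for two equal-prior 1-D Gaussian classes with common variance, pulling the class means together drives the optimal error toward 1/2, a floor no classifier trained on that data can beat. The snippet below evaluates only this textbook Gaussian case; it is not the paper's optimisation procedure.

```python
# Closed-form Bayes error for two equal-prior 1-D Gaussians N(mu0, s^2)
# and N(mu1, s^2): Phi(-|mu1 - mu0| / (2s)). As the class-conditionals
# are pulled together, the error floor rises toward 0.5. (Textbook
# Gaussian case only; the paper optimises far more general perturbations.)
from math import erf, sqrt

def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bayes_error(mu0, mu1, sigma):
    return std_normal_cdf(-abs(mu1 - mu0) / (2.0 * sigma))

for gap in (4.0, 2.0, 0.5):   # shrinking class separation
    print(f"gap={gap:3.1f}  Bayes error={bayes_error(0.0, gap, 1.0):.3f}")
# gap=4.0 -> 0.023, gap=2.0 -> 0.159, gap=0.5 -> 0.401
```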
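Finally, for intuition about bounding extreme values in GP regression, the sketch below fits a toy GP and upper-bounds the maximum of the latent function over a finite grid with a plain Gaussian-tail union bound. Chaining, the subject of the paper, is what replaces this naive bound on continuous domains; the RBF kernel, noise level, grid, and confidence level here are all assumptions for illustration.

```python
# Toy GP regression plus a naive high-probability upper bound on
# max_i f(x_i) over a finite grid, via a Gaussian tail union bound.
# This is the crude baseline that chaining-based bounds improve upon.
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (20, 1))                      # training inputs
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(0, 1, 200)[:, None]                # finite test grid

K = rbf(X, X) + 0.1 ** 2 * np.eye(len(X))           # noisy kernel matrix
Ks = rbf(X, Xs)
mean = Ks.T @ np.linalg.solve(K, y)                 # posterior mean
var = np.clip(1.0 - np.einsum('ij,ji->i', Ks.T, np.linalg.solve(K, Ks)),
              0.0, None)                            # posterior variance

# Under the GP posterior, f_i ~ N(mean_i, var_i); a union bound over the
# n grid points with beta = Phi^{-1}(1 - delta/n) gives, with probability
# at least 1 - delta, max_i f_i <= max_i (mean_i + beta * sd_i).
delta, n = 0.05, len(Xs)
beta = norm.ppf(1 - delta / n)
upper = (mean + beta * np.sqrt(var)).max()
print(f"with prob >= {1 - delta:.2f}: max f over the grid <= {upper:.3f}")
```

The union-bound coefficient beta grows with the grid size n, which is exactly the looseness that chaining arguments avoid; the sketch is only meant to make the object being bounded concrete.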