Machine learning research is placing growing emphasis on calibration and uncertainty estimation, with the goal of building more robust and trustworthy models. Researchers are exploring new methods for evaluating and improving model calibration, including equivariant networks, utility-aware calibration, and local calibration techniques. These advances could improve model reliability across a range of applications, particularly in safety-critical and high-stakes settings such as healthcare. Notable papers in this area include:
- On Uncertainty Calibration for Equivariant Functions, which presents a theoretical framework for understanding the relationship between equivariance and uncertainty estimation.
- Scalable Utility-Aware Multiclass Calibration, which proposes a general framework for evaluating multiclass calibration relative to a specific utility function.
- Multiclass Local Calibration With the Jensen-Shannon Distance, which introduces a local perspective on multiclass calibration and proposes a practical method for enhancing local calibration in neural networks.
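To make the local-calibration idea concrete, here is a small illustrative sketch (not the method from any of the papers above): it measures the Jensen-Shannon distance between a model's predicted class probabilities at a point and the empirical label frequencies among that point's nearest neighbours. The function names and the neighbour-based setup are hypothetical simplifications for illustration.

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two discrete distributions.

    JSD(P, Q) = sqrt(0.5 * KL(P || M) + 0.5 * KL(Q || M)), with M = (P + Q) / 2.
    Using the natural log, the distance is bounded by sqrt(log 2).
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def local_calibration_gap(pred_probs, neighbour_labels, n_classes):
    """Hypothetical local-calibration check: compare predicted probabilities
    against the empirical label distribution of nearby points."""
    counts = np.bincount(neighbour_labels, minlength=n_classes)
    empirical = counts / counts.sum()
    return js_distance(pred_probs, empirical)

# Toy example: a model's prediction at one point vs. the labels of
# its 7 nearest neighbours in feature space.
pred = np.array([0.7, 0.2, 0.1])
labels = np.array([0, 0, 1, 0, 2, 0, 0])
gap = local_calibration_gap(pred, labels, n_classes=3)
```

A perfectly locally calibrated prediction would yield a gap of zero; larger values indicate that the predicted distribution diverges from the labels actually observed in the neighbourhood.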