The field of machine learning is placing greater emphasis on uncertainty quantification, with a focus on methods that provide reliable, robust uncertainty estimates for complex systems. This shift is driven by the need for trustworthy, transparent decision-making in high-stakes applications such as healthcare, finance, and autonomous systems. Recent research has explored conformal prediction, stochastic operator networks, and distributionally robust optimization to address this challenge, and these methods have shown promise even under distribution shift and data contamination. Notably, the integration of conformal prediction with deep learning models has emerged as a promising approach, since it yields distribution-free prediction intervals with finite-sample coverage guarantees.
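To make the coverage guarantee concrete, the split conformal procedure below is a minimal sketch of how any fitted regressor can be wrapped to produce such intervals; it is illustrative only and not the method of any paper cited here, and the `model`, `X_cal`, `y_cal`, and `X_test` names are assumptions standing in for a generic scikit-learn-style regressor and held-out data.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction: wrap a fitted regressor's point
    predictions in intervals with marginal coverage >= 1 - alpha,
    assuming calibration and test points are exchangeable."""
    # Nonconformity scores on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    # Conformal quantile with the finite-sample correction.
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # Symmetric interval around each test-point prediction.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

The guarantee is marginal and holds under exchangeability of calibration and test data, which is precisely the assumption the distributionally robust and contamination-aware methods above aim to relax.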
Some noteworthy papers in this area include:
- Distribution-Free Uncertainty-Aware Virtual Sensing via Conformalized Neural Operators, which introduces a framework that equips neural operator-based virtual sensing with calibrated, distribution-free prediction intervals.
- Conformal Prediction for Privacy-Preserving Machine Learning, which investigates the integration of conformal prediction with supervised learning on deterministically encrypted data.