Advances in Trustworthy Machine Learning

The field of machine learning is moving toward more trustworthy and reliable models, as reflected in a growing focus on uncertainty quantification, selective prediction, and model calibration. Researchers are exploring methods such as conformal prediction, which wraps a trained model's outputs in prediction sets with statistical coverage guarantees; selective prediction, which lets a model abstain when its uncertainty is too high; and stochastic masking for improving calibration. These approaches aim to make predictions more reliable and transparent, which is crucial for high-stakes applications. Noteworthy papers in this area include DSperse, which proposes a framework for targeted verification in zero-knowledge machine learning that enables scalable and flexible verification strategies, and Uncertainty-Driven Reliability, which investigates how uncertainty estimation can enhance the safety and trustworthiness of machine learning systems and proposes a lightweight post-hoc abstention method that works across tasks while avoiding the cost of deep ensembles.
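To make two of these ideas concrete, here is a minimal sketch of split conformal prediction and confidence-thresholded selective prediction. It is illustrative only: the function names, the nonconformity score (one minus the probability of the true class), the toy Dirichlet data, and the abstention threshold are our own assumptions, not taken from any of the papers listed below.

```python
import numpy as np


def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n_cal, K) softmax probabilities on a held-out calibration set.
    cal_labels: (n_cal,) true class indices for the calibration set.
    Returns a boolean (n_test, K) matrix of prediction sets that contain the
    true label with probability >= 1 - alpha (marginally over test points).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores
    # (requires numpy >= 1.22 for the method= keyword).
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Include every class whose score does not exceed the threshold.
    return (1.0 - test_probs) <= q


def selective_predict(test_probs, tau=0.8):
    """Abstain whenever the top-class confidence falls below tau (returns -1)."""
    preds = test_probs.argmax(axis=1)
    conf = test_probs.max(axis=1)
    return np.where(conf >= tau, preds, -1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(5), size=600)   # fake softmax outputs
    labels = rng.integers(0, 5, size=600)
    sets = split_conformal_sets(probs[:500], labels[:500], probs[500:])
    print("mean prediction-set size:", sets.sum(axis=1).mean())
    print("abstention rate:", (selective_predict(probs[500:]) == -1).mean())
```

The two mechanisms trade off differently: conformal prediction keeps a coverage guarantee but returns larger sets when the model is uncertain, while selective prediction returns a single label or nothing, pushing uncertain cases to a human reviewer.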

Sources

DSperse: A Framework for Targeted Verification in Zero-Knowledge Machine Learning

Conformal Prediction and Trustworthy AI

On the Limits of Selective AI Prediction: A Case Study in Clinical Decision Making

Uncertainty-Driven Reliability: Selective Prediction and Trustworthy Deployment in Modern Machine Learning

Unequal Uncertainty: Rethinking Algorithmic Interventions for Mitigating Discrimination from AI

Beyond Predictions: A Study of AI Strength and Weakness Transparency Communication on Human-AI Collaboration

Deep Neural Network Calibration by Reducing Classifier Shift with Stochastic Masking
