The field of machine learning is moving toward more trustworthy and reliable models, as reflected in a growing focus on uncertainty quantification, selective prediction, and model calibration. Researchers are exploring methods such as conformal prediction, uncertainty-driven reliability, and stochastic masking to make predictions both more accurate and more transparent, which is crucial for high-stakes applications. Noteworthy papers in this area include: DSperse, which proposes a framework for targeted verification in zero-knowledge machine learning, enabling scalable and flexible verification strategies; and Uncertainty-Driven Reliability, which investigates how uncertainty estimation can enhance the safety and trustworthiness of machine learning systems and proposes a lightweight post-hoc abstention method that works across tasks while avoiding the cost of deep ensembles.
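
As a rough illustration of the kind of lightweight post-hoc abstention discussed above, the sketch below thresholds a trained classifier's softmax confidence, with the threshold chosen on a held-out calibration set. This is a minimal, generic example: the function names and the `target_coverage` parameter are illustrative assumptions, not the method from either cited paper.

```python
# Minimal sketch of post-hoc selective prediction: pick a confidence
# threshold on held-out calibration data so the model answers only when
# its softmax confidence is high enough, and abstains otherwise.
# Names and the `target_coverage` parameter are illustrative assumptions.
import numpy as np

def fit_abstention_threshold(cal_probs: np.ndarray, target_coverage: float = 0.8) -> float:
    """Choose the confidence threshold that accepts roughly the
    `target_coverage` most-confident calibration predictions.

    cal_probs: (n_samples, n_classes) predicted class probabilities.
    """
    confidences = cal_probs.max(axis=1)
    return float(np.quantile(confidences, 1.0 - target_coverage))

def predict_with_abstention(test_probs: np.ndarray, threshold: float) -> np.ndarray:
    """Return predicted class indices, or -1 where the model abstains."""
    confidences = test_probs.max(axis=1)
    preds = test_probs.argmax(axis=1)
    return np.where(confidences >= threshold, preds, -1)

# Usage with stand-in probabilities; in practice these would come from
# any already-trained classifier, making the procedure fully post hoc.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=1000)
test_probs = rng.dirichlet(np.ones(5), size=200)
tau = fit_abstention_threshold(cal_probs, target_coverage=0.8)
answers = predict_with_abstention(test_probs, tau)
print(f"threshold={tau:.3f}, abstention rate={np.mean(answers == -1):.2%}")
```

Because the threshold is fit purely on model outputs, this style of abstention needs no retraining or ensembling, which is what makes such post-hoc approaches attractive compared with deep ensembles.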