The field of conformal prediction and uncertainty quantification is moving toward more robust and efficient methods for improving the trustworthiness of neural networks. Researchers are adapting conformal prediction to distribution shifts and adversarial attacks, aiming to preserve valid coverage and control risk in settings where standard conformal prediction fails. There is also growing interest in improving the reliability of deep learning models through post-hoc uncertainty quantification techniques, such as conflict-aware evidential deep learning and test-time resource utilization. Together, these advances have the potential to significantly improve the performance and trustworthiness of neural networks in high-stakes applications.

Noteworthy papers include:

- Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks, which proposes an efficient and precise method for estimating robust conformal prediction sets.
- Conformal Prediction Adaptive to Unknown Subpopulation Shifts, which develops methods that provably adapt conformal prediction to unknown subpopulation shifts.
- Quantifying Adversarial Uncertainty in Evidential Deep Learning using Conflict Resolution, which introduces a lightweight post-hoc uncertainty quantification approach that mitigates known weaknesses of evidential deep learning.
- TRUST: Test-time Resource Utilization for Superior Trustworthiness, which proposes a test-time optimization method for producing more reliable confidence estimates.
- Beyond Overconfidence: Foundation Models Redefine Calibration in Deep Neural Networks, which presents a comprehensive investigation of the calibration behavior of foundation models.
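To make the coverage guarantee that these methods build on concrete, the sketch below shows standard split conformal prediction for classification, assuming only a generic classifier that outputs class probabilities. It is a minimal illustration of the baseline recipe, not an implementation of any of the listed papers, whose contributions lie in making this procedure robust to perturbations and distribution shifts.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Standard split conformal prediction for classification.

    cal_probs:  (n, K) predicted class probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) predicted class probabilities on test inputs
    Returns a boolean (m, K) matrix: True where a class enters the prediction set.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # A class is included whenever its score does not exceed the threshold.
    return (1.0 - test_probs) <= qhat
```

Under exchangeability of calibration and test data, the returned sets contain the true label with probability at least 1 - alpha; distribution shift or adversarial perturbation breaks that assumption, which is precisely the failure mode the robust and shift-adaptive methods above are designed to address.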