The field of medical imaging is moving toward more robust and reliable models, with a focus on distributional shift, uncertainty quantification, and heterogeneity in data. Researchers are exploring new approaches to improve how deep learning models hold up in real-world deployment, including probabilistic coverage guarantees, distributionally robust training, and graph-radiomic learning. These innovations have the potential to make medical imaging models more accurate and trustworthy, particularly in safety-critical applications.

Notable papers in this area include:

- Probabilistic Conformal Coverage Guarantees in Small-Data Settings, which introduces a plug-and-play adjustment to the conformal significance level so that coverage guarantees hold with high probability even when calibration data are scarce (see the conformal sketch after this list).
- NeuroRAD-FM: A Foundation Model for Neuro-Oncology with Distributionally Robust Training, which develops a neuro-oncology-specific foundation model trained with a distributionally robust loss function to improve cross-institution generalization (a minimal DRO sketch follows below).
- Graph-Radiomic Learning Descriptor to Characterize Imaging Heterogeneity in Confounding Tumor Pathologies, which presents a new descriptor for characterizing intralesional heterogeneity on clinical MRI scans.
- Probabilistic Runtime Verification, Evaluation and Risk Assessment of Visual Deep Learning Systems, which proposes a methodology for verifying, evaluating, and assessing the risk of visual deep learning systems at runtime.
- Efficient Cell Painting Image Representation Learning via Cross-Well Aligned Masked Siamese Network, which presents a representation learning framework that aligns embeddings of cells subjected to the same perturbation across different wells (an alignment-loss sketch appears below).
- Anomaly Detection by Clustering DINO Embeddings using a Dirichlet Process Mixture, which leverages informative embeddings from foundation models for unsupervised anomaly detection in medical imaging (see the Dirichlet-process sketch below).
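To make the small-data conformal idea concrete, here is a minimal sketch assuming the standard split-conformal setting: conditional on the calibration draw, coverage follows a Beta law, so one can search for a stricter working significance level alpha' such that the target coverage holds with probability at least 1 - delta over the calibration sampling. The function name `adjusted_alpha` and the grid search are illustrative; the paper's exact adjustment may differ.

```python
import numpy as np
from scipy import stats

def adjusted_alpha(n, alpha_target, delta, grid=2000):
    # Split-conformal coverage, conditional on the calibration set, follows
    # Beta(k, n + 1 - k) with k = ceil((n + 1) * (1 - alpha)).
    # Find the largest working alpha' <= alpha_target such that
    # P(coverage >= 1 - alpha_target) >= 1 - delta.
    for alpha in np.linspace(alpha_target, 1.0 / (n + 1), grid):
        k = int(np.ceil((n + 1) * (1 - alpha)))
        if k > n:
            p = 1.0  # prediction set is the whole label space, coverage = 1
        else:
            p = stats.beta.sf(1 - alpha_target, k, n + 1 - k)
        if p >= 1 - delta:
            return alpha
    return None  # n too small for the requested (alpha_target, delta) pair

# Usage with calibration nonconformity scores `cal_scores` (shape (n,)):
# alpha_p = adjusted_alpha(len(cal_scores), alpha_target=0.1, delta=0.05)
# k = int(np.ceil((len(cal_scores) + 1) * (1 - alpha_p)))
# q = np.sort(cal_scores)[min(k, len(cal_scores)) - 1]  # set threshold
```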
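The distributionally robust training behind NeuroRAD-FM can be illustrated with a group-DRO-style objective, treating each institution as a group and leaning the loss toward the worst-performing one. This is a generic sketch in the spirit of Sagawa et al. (2020), not the paper's actual objective; the class name, the `eta` step size, and the availability of per-sample institution ids are assumptions.

```python
import torch

class GroupDROLoss(torch.nn.Module):
    # Sketch: keep a weight per institution and update it multiplicatively
    # toward the group with the highest current loss, so training optimizes
    # a worst-group-leaning (distributionally robust) objective.
    def __init__(self, n_groups, eta=0.01):
        super().__init__()
        self.n_groups = n_groups
        self.eta = eta
        self.register_buffer("w", torch.ones(n_groups) / n_groups)

    def forward(self, per_sample_loss, group_ids):
        group_losses = []
        for g in range(self.n_groups):
            mask = group_ids == g
            group_losses.append(per_sample_loss[mask].mean() if mask.any()
                                else per_sample_loss.new_zeros(()))
        g_loss = torch.stack(group_losses)
        with torch.no_grad():  # exponentiated-gradient step on the weights
            self.w.mul_(torch.exp(self.eta * g_loss)).div_(self.w.sum())
        return (self.w * g_loss).sum()

# Usage: per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
#        loss = dro(per_sample_loss, institution_ids); loss.backward()
```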
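For the cross-well alignment idea, one simple way to pull together embeddings of same-perturbation cells from different wells is a contrastive (InfoNCE-style) loss, where matched rows across wells are positives and the rest of the batch are negatives. This is a generic stand-in, not the paper's masked-siamese objective, and the function name and `temperature` value are assumptions.

```python
import torch
import torch.nn.functional as F

def cross_well_alignment_loss(emb_a, emb_b, temperature=0.1):
    # Row i of `emb_a` and `emb_b` are embeddings of cells given the same
    # perturbation but imaged in different wells; other rows act as negatives.
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    # symmetrize so both wells are aligned toward each other
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```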
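The anomaly-detection recipe maps naturally onto scikit-learn's `BayesianGaussianMixture`, which implements a truncated Dirichlet process mixture: fit it on embeddings of normal data, then score test embeddings by negative log-likelihood. The helper names and the choice of a frozen DINO backbone for extracting embeddings are assumptions; the paper's exact clustering and scoring details may differ.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# `train_emb`: (N, D) embeddings of anomaly-free images from a frozen DINO
# backbone; `test_emb`: embeddings to score. Extraction is assumed done.
def fit_dpmm(train_emb, max_components=50):
    dpmm = BayesianGaussianMixture(
        n_components=max_components,  # truncation level of the DP mixture
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        max_iter=500,
    )
    return dpmm.fit(train_emb)

def anomaly_scores(dpmm, test_emb):
    # low likelihood under the clusters of normal data => likely anomalous
    return -dpmm.score_samples(test_emb)
```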