The field of deep learning is moving toward more interpretable and robust models. Researchers are exploring techniques to improve the reliability and trustworthiness of deep learning systems, particularly in high-stakes applications such as medical imaging and healthcare. One notable direction is the use of self-supervised learning and regularization to encourage models to rely on genuine features rather than spurious ones. Another area of focus is the development of more effective uncertainty quantification methods, such as conformal prediction, which provide calibrated estimates of model uncertainty. There is also growing interest in integrating domain-specific knowledge and semantics into deep learning models to improve both performance and interpretability.

Noteworthy papers in this area include AIM, which proposes a self-supervised masking method to improve model interpretability and robustness; "On the notion of missingness for path attribution explainability methods in medical settings," which introduces a counterfactual-guided approach to selecting medically meaningful baselines for explainability methods; and "Clinical semantics for lung cancer prediction," which integrates domain-specific semantic information into deep learning models to improve prediction of lung cancer onset.
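
Since conformal prediction is named above as an uncertainty quantification method, a minimal sketch of split conformal prediction for classification may help make the idea concrete. The function name, the nonconformity score (one minus the probability assigned to the true class), and the numpy-based implementation are illustrative assumptions and are not drawn from any of the cited papers.

```python
# Minimal sketch of split conformal prediction for classification.
# Assumes access to predicted class probabilities from any fitted model;
# all names and parameter choices here are illustrative.
import numpy as np

def conformal_prediction_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    """Build prediction sets with ~(1 - alpha) marginal coverage.

    probs_cal:  (n_cal, n_classes) probabilities on a held-out calibration set
    labels_cal: (n_cal,) true labels for the calibration set
    probs_test: (n_test, n_classes) probabilities for new inputs
    """
    n = len(labels_cal)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Conformal quantile with the finite-sample correction (n + 1).
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # A class enters the prediction set if its score is below the threshold,
    # i.e. its predicted probability is at least 1 - qhat.
    return probs_test >= 1.0 - qhat  # boolean mask of shape (n_test, n_classes)
```

Under this sketch, `conformal_prediction_sets(p_cal, y_cal, p_test, alpha=0.1)` returns, for each test input, the set of classes that cannot be rejected at the calibrated threshold; larger sets signal higher model uncertainty.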