Advances in Interpretable and Robust Deep Learning

The field of deep learning is moving toward more interpretable and robust models. Researchers are developing techniques to improve the reliability and trustworthiness of deep learning systems, particularly in high-stakes applications such as medical imaging and healthcare. One notable direction is the use of self-supervised learning and regularization to encourage models to rely on genuine features rather than spurious ones. Another focus is more effective uncertainty quantification, such as conformal prediction, which provides reliable estimates of model uncertainty. There is also growing interest in integrating domain-specific knowledge and semantics into deep learning models to improve both their performance and their interpretability.

Noteworthy papers in this area include AIM, which proposes a self-supervised masking method to improve model interpretability and robustness; "On the notion of missingness for path attribution explainability methods in medical settings," which introduces a counterfactual-guided approach to selecting medically meaningful baselines for explainability methods; and "Clinical semantics for lung cancer prediction," which integrates domain-specific semantic information into deep learning models to improve prediction of lung cancer onset.
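To make the conformal prediction direction concrete, the following is a minimal sketch of split conformal prediction for classification. All data here is synthetic and the setup is illustrative, not taken from any of the cited papers: calibration-set nonconformity scores (one minus the probability assigned to the true class) yield a quantile threshold, and at test time every class whose score falls under that threshold enters the prediction set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: softmax outputs for 3 classes plus true labels.
n_cal, n_classes = 100, 3
probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
labels = rng.integers(0, n_classes, size=n_cal)

alpha = 0.1  # target miscoverage rate (aim for ~90% coverage)
# Nonconformity score: 1 - probability assigned to the true class.
scores = 1.0 - probs[np.arange(n_cal), labels]
# Finite-sample corrected quantile level for split conformal prediction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_set(p):
    """All classes whose nonconformity score stays within the threshold."""
    return [c for c in range(n_classes) if 1.0 - p[c] <= qhat]

test_probs = rng.dirichlet(np.ones(n_classes))
print(qhat, prediction_set(test_probs))
```

The prediction set can contain several classes when the model is unsure, which is exactly the behavior that makes conformal methods attractive for medical applications such as diabetic retinopathy grading.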
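The baseline-selection question raised by the path attribution paper can also be illustrated. Path attribution methods such as Integrated Gradients attribute a prediction by integrating gradients along a path from a baseline x' (representing a "missing" input) to the actual input x; the choice of x' is exactly what the paper argues should be medically meaningful. The sketch below uses a toy differentiable function with hand-coded gradients rather than a trained network, purely to show the mechanics and the completeness property.

```python
import numpy as np

# Toy "model": linear terms plus one quadratic term, with an analytic gradient.
def model(x):
    w = np.array([0.5, -1.0, 2.0])
    return float(x @ w + 0.3 * x[0] ** 2)

def grad(x):
    g = np.array([0.5, -1.0, 2.0])
    g[0] += 0.6 * x[0]
    return g

def integrated_gradients(x, baseline, steps=256):
    """Riemann-sum (midpoint) approximation of the path integral
    IG_i = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + t(x - x')) dt."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros(3)  # all-zero baseline; the cited paper argues for
                        # counterfactually meaningful baselines instead
attr = integrated_gradients(x, baseline)
# Completeness: attributions sum to model(x) - model(baseline).
print(attr, attr.sum(), model(x) - model(baseline))
```

Swapping the all-zero baseline for a counterfactual one (for example, a healthy-tissue reference image in a medical setting) changes which features receive credit, which is the crux of the paper's argument.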

Sources

AIM: Amending Inherent Interpretability via Self-Supervised Masking

Human Digital Twin: Data, Models, Applications, and Challenges

In-hoc Concept Representations to Regularise Deep Learning in Medical Imaging

Effect of Data Augmentation on Conformal Prediction for Diabetic Retinopathy

Personalized Counterfactual Framework: Generating Potential Outcomes from Wearable Data

On the notion of missingness for path attribution explainability methods in medical settings: Guiding the selection of medically meaningful baselines

Clinical semantics for lung cancer prediction

Saving for the future: Enhancing generalization via partial logic regularization
