Advances in Explainability and Whole Slide Image Analysis
The field of explainability and whole slide image analysis is rapidly advancing, with a focus on developing innovative methods for interpreting model decisions and understanding complex medical images. Recent research has emphasized fine-grained interpretability, counterfactual explanations, and spatial information bottleneck techniques as ways to improve model trustworthiness and robustness. Noteworthy papers in this area include Dynamic Residual Encoding with Slide-Level Contrastive Learning for End-to-End Whole Slide Image Representation, which proposes a method for learning whole slide image representations end to end; Contrastive Integrated Gradients: A Feature Attribution-Based Method for Explaining Whole Slide Image Classification, which introduces an attribution method for explaining whole slide image classifiers; and Spatial Information Bottleneck for Interpretable Visual Recognition, which reframes gradient-based attribution from an information-theoretic perspective.
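To make the attribution theme concrete, the sketch below shows a plain integrated-gradients computation over pre-extracted patch features of a toy attention-MIL slide classifier. It is a minimal illustration of gradient-based feature attribution in the whole slide image setting, not the Contrastive Integrated Gradients or Spatial Information Bottleneck methods themselves; the model, feature dimensions, and function names (ToyMIL, integrated_gradients) are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the cited papers' code):
# integrated gradients over patch features for a toy attention-MIL WSI classifier.
import torch
import torch.nn as nn

class ToyMIL(nn.Module):
    """Attention-pooled bag classifier over pre-extracted patch features."""
    def __init__(self, dim=512, n_classes=2):
        super().__init__()
        self.attn = nn.Linear(dim, 1)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats):                              # feats: (n_patches, dim)
        weights = torch.softmax(self.attn(feats), dim=0)   # (n_patches, 1)
        slide_feat = (weights * feats).sum(dim=0)          # (dim,)
        return self.head(slide_feat)                       # (n_classes,)

def integrated_gradients(model, feats, target, baseline=None, steps=32):
    """Approximate IG_i = (x_i - x'_i) * mean_k dF(x' + (k/steps)(x - x'))/dx_i."""
    baseline = torch.zeros_like(feats) if baseline is None else baseline
    total_grad = torch.zeros_like(feats)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (feats - baseline)
        point.requires_grad_(True)
        score = model(point)[target]                       # class logit at this path point
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    return (feats - baseline) * total_grad / steps         # (n_patches, dim)

model = ToyMIL()
feats = torch.randn(100, 512)            # one slide = a bag of 100 patch feature vectors
attr = integrated_gradients(model, feats, target=1)
patch_scores = attr.sum(dim=1)           # per-patch relevance for the target class
print(patch_scores.topk(5).indices)      # indices of the most influential patches
```

Aggregating the per-feature attributions back to patch level, as in the last two lines, is one simple way to turn feature attributions into a slide-level heatmap over tissue regions.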
Sources
Dynamic Residual Encoding with Slide-Level Contrastive Learning for End-to-End Whole Slide Image Representation
Towards Fine-Grained Interpretability: Counterfactual Explanations for Misclassification with Saliency Partition
Contrastive Integrated Gradients: A Feature Attribution-Based Method for Explaining Whole Slide Image Classification