Advances in Explainability and Whole Slide Image Analysis

The field of explainability and whole slide image analysis is advancing rapidly, with a focus on new methods for interpreting model decisions and understanding complex medical images. Recent research emphasizes fine-grained interpretability, counterfactual explanations, and spatial information bottleneck techniques as ways to improve model trustworthiness and robustness. Noteworthy papers in this area include Dynamic Residual Encoding with Slide-Level Contrastive Learning for End-to-End Whole Slide Image Representation, which learns whole slide image representations end-to-end; Contrastive Integrated Gradients: A Feature Attribution-Based Method for Explaining Whole Slide Image Classification, which introduces an attribution method tailored to whole slide image classification; and Spatial Information Bottleneck for Interpretable Visual Recognition, which frames gradient-based attribution from an information-theoretic perspective.
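To make the attribution idea concrete, the sketch below implements plain integrated gradients, the baseline technique that Contrastive Integrated Gradients builds on, for a toy logistic model over patch features. The model, weights, baseline, and step count are illustrative assumptions for this digest, not details taken from any of the papers listed under Sources.

```python
import numpy as np

# Toy differentiable "classifier": logistic regression over patch features.
# The weights and inputs are illustrative placeholders, not from any cited paper.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # hypothetical feature weights
b = 0.1

def predict(x):
    """Probability of the positive class for a feature vector x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grad(x):
    """Analytic gradient of predict() with respect to x."""
    p = predict(x)
    return p * (1.0 - p) * w

def integrated_gradients(x, baseline=None, steps=64):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i along the straight-line path."""
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps            # midpoint Riemann sum
    path = baseline + alphas[:, None] * (x - baseline)   # interpolated points between baseline and input
    avg_grad = np.mean([grad(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

x = rng.normal(size=16)                 # features of one image patch (toy data)
attributions = integrated_gradients(x)
# Completeness check: attributions should sum to F(x) - F(baseline).
print(attributions.sum(), predict(x) - predict(np.zeros_like(x)))
```

The final print verifies the completeness property of integrated gradients: the attributions sum (approximately, given the Riemann approximation) to the difference between the model output at the input and at the baseline.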

Sources

Dynamic Residual Encoding with Slide-Level Contrastive Learning for End-to-End Whole Slide Image Representation

Towards Fine-Grained Interpretability: Counterfactual Explanations for Misclassification with Saliency Partition

Rethinking Explanation Evaluation under the Retraining Scheme

Contrastive Integrated Gradients: A Feature Attribution-Based Method for Explaining Whole Slide Image Classification

SENCA-st: Integrating Spatial Transcriptomics and Histopathology with Cross Attention Shared Encoder for Region Identification in Cancer Pathology

Where did you get that? Towards Summarization Attribution for Analysts

Spatial Information Bottleneck for Interpretable Visual Recognition

Distribution-Based Feature Attribution for Explaining the Predictions of Any Classifier
