Explainable AI in Medical Imaging and Beyond

The field of explainable AI is advancing rapidly, with a focus on models whose predictions are transparent enough to be audited and trusted. Recent work integrates neural and symbolic reasoning to build models that are both more interpretable and more accurate. Gradient-based attribution methods such as Grad-CAM are being used to probe the explainability of vision transformers (see the sketch after this list), concept-based models are improving the interpretability of language models, and multimodal XAI frameworks are helping to detect and mitigate biases in deep neural networks. Noteworthy papers in this area include:

- TinyViT-Batten: a few-shot vision transformer for early Batten disease detection on pediatric MRI, achieving high accuracy together with explainable attention.
- CLMN: a neural-symbolic framework for concept-based language models that improves both performance and interpretability.
- ViConEx-Med: a transformer-based framework for visual concept explainability in medical image analysis that outperforms prior concept-based models.
- Evaluating the Explainability of Vision Transformers in Medical Imaging: compares vision transformer architectures and pre-training strategies, highlighting the importance of Grad-CAM for faithful and well-localized explanations.
- A Multimodal XAI Framework for Trustworthy CNNs and Bias Detection in Deep Representation Learning: a multimodal XAI framework for bias detection and mitigation that achieves high classification accuracy and explanation fidelity.
- Hybrid Interval Type-2 Mamdani-TSK Fuzzy System for Regression Analysis: a fuzzy regression method combining the interpretability of Mamdani systems with the precision of TSK models, with state-of-the-art performance on benchmark datasets.
- Symbol Grounding in Neuro-Symbolic AI: a gentle introduction to reasoning shortcuts in neuro-symbolic AI, covering their causes and consequences and reviewing methods for dealing with them.
- DEXTER: a data-free framework for generating global, textual explanations of visual classifiers, producing natural-language descriptions of a classifier's decision process without access to training data or ground-truth labels.
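As a concrete illustration of the Grad-CAM evaluations mentioned above, the sketch below applies Grad-CAM to a vision transformer by treating the patch tokens of the last encoder block as spatial feature maps. This is a minimal sketch only: it assumes torchvision's `vit_b_16` with ImageNet weights and a random tensor standing in for a preprocessed medical image, not the specific models or datasets of the papers listed here.

```python
# Minimal Grad-CAM sketch for a vision transformer (illustrative only).
# Assumption: torchvision's vit_b_16; the random input stands in for a
# preprocessed scan. Not the pipeline of any paper cited above.
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # out: (B, 197, 768) token embeddings of the last encoder block
    activations["value"] = out
    out.register_hook(lambda grad: gradients.update(value=grad))

# Hook the last encoder block; its patch tokens act as the "feature maps".
model.encoder.layers[-1].register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top class score

# Drop the CLS token and reshape the 196 patch tokens into a 14x14 grid.
acts = activations["value"][:, 1:, :].detach().reshape(1, 14, 14, -1)
grads = gradients["value"][:, 1:, :].reshape(1, 14, 14, -1)

weights = grads.mean(dim=(1, 2), keepdim=True)            # pooled gradients per channel
cam = torch.relu((weights * acts).sum(dim=-1))            # weighted sum over channels
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 14, 14) coarse localization map
```

In practice the coarse 14x14 map is upsampled to the input resolution and overlaid on the image, and faithfulness is judged by how well the highlighted regions align with the structures that actually drive the diagnosis.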

Sources

TinyViT-Batten: Few-Shot Vision Transformer with Explainable Attention for Early Batten-Disease Detection on Pediatric MRI

CLMN: Concept based Language Models via Neural Symbolic Reasoning

ViConEx-Med: Visual Concept Explainability via Multi-Concept Token Transformer for Medical Image Analysis

Evaluating the Explainability of Vision Transformers in Medical Imaging

A Multimodal XAI Framework for Trustworthy CNNs and Bias Detection in Deep Representation Learning

Hybrid Interval Type-2 Mamdani-TSK Fuzzy System for Regression Analysis

Symbol Grounding in Neuro-Symbolic AI: A Gentle Introduction to Reasoning Shortcuts

DEXTER: Diffusion-Guided EXplanations with TExtual Reasoning for Vision Models
