Explainable AI in Medical Imaging and Clinical Decision-Making

Explainable AI (XAI) for medical imaging and clinical decision-making is advancing rapidly, driven by new methods for making model behavior interpretable and transparent. Recent research stresses integrating clinical domain knowledge and expertise into AI systems so that models are not only accurate but also reliable and trustworthy.

Two trends stand out; both are sketched in code below. First, hierarchical and graph-based approaches are being applied to complex medical data, such as images and patient features, to surface patterns and relationships that can inform clinical decisions. Second, XAI methods are being developed to detect and mitigate bias in medical datasets, a prerequisite for fair and equitable AI systems.

Noteworthy papers include ModelAuditor, which introduces a self-reflective agent for auditing and improving the reliability of clinical AI models, and ATHENA, which proposes a hierarchical graph neural network framework for personalized classification of subclinical atherosclerosis. Together, these developments point toward XAI systems that clinicians can audit, interrogate, and trust, and they are likely to keep driving innovation in medical imaging and clinical decision-making.
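To make the graph-based trend concrete, below is a minimal, illustrative sketch of message passing over a patient-similarity graph in plain PyTorch. This is not the ATHENA architecture; the layer design, dimensions, and toy adjacency construction are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One round of mean-aggregation message passing (simplified GCN-style layer)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency with self-loops.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # node degrees
        h = (adj @ x) / deg                                # mean over neighbors
        return torch.relu(self.linear(h))

class PatientGraphClassifier(nn.Module):
    """Two message-passing rounds, then a per-patient classification head."""
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.gc1 = GraphConvLayer(in_dim, hidden_dim)
        self.gc2 = GraphConvLayer(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, adj):
        h = self.gc2(self.gc1(x, adj), adj)
        return self.head(h)  # (N, n_classes) logits, one row per patient

# Toy usage: 8 patients, 5 clinical features each, random symmetric adjacency.
torch.manual_seed(0)
x = torch.randn(8, 5)
adj = (torch.rand(8, 8) > 0.6).float()
adj = ((adj + adj.T + torch.eye(8)) > 0).float()  # symmetrize + add self-loops
model = PatientGraphClassifier(in_dim=5, hidden_dim=16, n_classes=2)
print(model(x, adj).shape)  # torch.Size([8, 2])
```

For the bias trend, a common first step in auditing a dataset or model is simply stratifying performance by subgroup (for example, acquisition site or patient sex) and inspecting the gaps. The helper below is a hedged sketch using scikit-learn's roc_auc_score; the subgroup names and synthetic predictions are invented for illustration and do not reproduce any paper's method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc_report(y_true, y_score, groups):
    """Return overall AUC plus per-subgroup AUCs for a binary classifier.

    y_true: binary labels; y_score: predicted probabilities;
    groups: array of subgroup tags aligned with the samples.
    """
    report = {"overall": roc_auc_score(y_true, y_score)}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            continue  # AUC is undefined when a subgroup has one class only
        report[str(g)] = roc_auc_score(y_true[mask], y_score[mask])
    return report

# Toy usage: synthetic labels and scores from two hypothetical scanner sites,
# with extra noise injected at "site_B" to mimic a site-dependent bias.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
site = np.where(rng.random(200) < 0.5, "site_A", "site_B")
score = y * 0.6 + rng.normal(0.2, 0.3, size=200)
score[site == "site_B"] += rng.normal(0, 0.25, size=(site == "site_B").sum())
score = np.clip(score, 0, 1)
print(subgroup_auc_report(y, score, site))
```

A large AUC gap between subgroups in such a report is a signal to investigate the dataset (label prevalence, acquisition artifacts) before trusting the model clinically.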

Sources

An autonomous agent for auditing and improving the reliability of clinical AI models

From Motion to Meaning: Biomechanics-Informed Neural Network for Explainable Cardiovascular Disease Identification

Concept-Based Mechanistic Interpretability Using Structured Knowledge Graphs

On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification

Feature-Guided Neighbor Selection for Non-Expert Evaluation of Model Predictions

Bridging Data Gaps of Rare Conditions in ICU: A Multi-Disease Adaptation Approach for Clinical Prediction

MADPOT: Medical Anomaly Detection with CLIP Adaptation and Partial Optimal Transport

Combining Human-centred Explainability and Explainable AI

Comprehensive Evaluation of Prototype Neural Networks

Explainable Artificial Intelligence in Biomedical Image Analysis: A Comprehensive Survey

Atherosclerosis through Hierarchical Explainable Neural Network Analysis

Bluish Veil Detection and Lesion Classification using Custom Deep Learnable Layers with Explainable Artificial Intelligence (XAI)

Neural Concept Verifier: Scaling Prover-Verifier Games via Concept Encodings

Understanding Dataset Bias in Medical Imaging: A Case Study on Chest X-rays

"So, Tell Me About Your Policy...": Distillation of interpretable policies from Deep Reinforcement Learning agents
