Explainable AI in Medical Imaging and Clinical Decision-Making

Explainable AI (XAI) for medical imaging and clinical decision-making is advancing quickly, with much of the effort directed at methods that make model behavior interpretable and transparent to clinicians. Recent work stresses integrating domain knowledge into AI systems so that models are not only accurate but also reliable and trustworthy in clinical use.

Two trends stand out. The first is the use of hierarchical and graph-based approaches to analyze complex medical data, such as images and patient features, in order to surface patterns and relationships that can inform clinical decisions (a minimal sketch of this idea appears below). The second is the development of XAI methods that detect and mitigate bias in medical datasets, which is essential if AI systems are to be fair and equitable across patient groups.

Noteworthy papers include ModelAuditor, which introduces a self-reflective agent for auditing and improving the reliability of clinical AI models, and ATHENA, which proposes a hierarchical graph neural network framework for personalized classification of subclinical atherosclerosis. Together, these developments carry direct implications for clinical practice and patient care and are likely to keep driving innovation in XAI for medical imaging and clinical decision-making.
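The hierarchical, graph-based pattern mentioned above can be illustrated with a small, self-contained sketch. The code below is not the ATHENA implementation; it assumes a two-level setup in which image-derived region nodes are propagated through one graph-convolution step, pooled into a patient-level embedding, and fused with hypothetical tabular clinical features for a binary risk prediction. All names, dimensions, and features here are illustrative assumptions.

```python
# Minimal sketch of a hierarchical, graph-based classifier over patient data.
# Illustrative only; not the ATHENA architecture.
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, normalize, propagate, project."""
    adj_hat = adj + np.eye(adj.shape[0])
    deg_inv_sqrt = 1.0 / np.sqrt(adj_hat.sum(axis=1))
    norm = deg_inv_sqrt[:, None] * adj_hat * deg_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)  # ReLU activation

# Level 1: a graph over image-derived regions (e.g., arterial segments) for one patient.
n_regions, d_img, d_hidden = 6, 16, 8
region_adj = (rng.random((n_regions, n_regions)) > 0.5).astype(float)
region_adj = np.maximum(region_adj, region_adj.T)  # symmetric adjacency
np.fill_diagonal(region_adj, 0.0)
region_feats = rng.normal(size=(n_regions, d_img))
w1 = rng.normal(scale=0.1, size=(d_img, d_hidden))
region_emb = gcn_layer(region_adj, region_feats, w1)

# Level 2: pool region embeddings into a patient-level node and fuse with
# tabular clinical features (hypothetical: age, lipid panel, blood pressure, smoking).
patient_emb = region_emb.mean(axis=0)
clinical = rng.normal(size=4)
fused = np.concatenate([patient_emb, clinical])

# Linear read-out head for a binary risk label.
w_out = rng.normal(scale=0.1, size=fused.shape[0])
logit = float(fused @ w_out)
prob = 1.0 / (1.0 + np.exp(-logit))
print(f"predicted risk: {prob:.3f}")
```

Because the patient-level prediction is built from region-level embeddings, attributions can in principle be traced back to individual image regions and clinical features, which is the interpretability benefit such hierarchical designs aim for.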
Sources
From Motion to Meaning: Biomechanics-Informed Neural Network for Explainable Cardiovascular Disease Identification
On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification
Bridging Data Gaps of Rare Conditions in ICU: A Multi-Disease Adaptation Approach for Clinical Prediction