Explainable Medical Imaging Diagnostics

The field of medical imaging diagnostics is shifting toward more transparent and interpretable models. Recent research leverages vision-language models, multimodal learning, and explainable artificial intelligence (XAI) techniques to improve both the accuracy and the trustworthiness of diagnostic systems. These approaches aim to give clinicians not only accurate diagnoses but also detailed explanations and reports, easing the adoption of AI in high-stakes medical settings.

Noteworthy papers in this area include X-Ray-CoT, which achieves competitive performance while generating high-quality, explainable reports for chest X-ray diagnosis; Med-CTX, a fully transformer-based multimodal framework for explainable breast cancer ultrasound segmentation that achieves state-of-the-art performance and provides clinically grounded explanations; and XDR-LVLM, an explainable vision-language large model for diabetic retinopathy diagnosis that attains high precision and generates comprehensive diagnostic reports.

Sources

X-Ray-CoT: Interpretable Chest X-ray Diagnosis with Vision-Language Models via Chain-of-Thought Reasoning

Eyes on the Image: Gaze Supervised Multimodal Learning for Chest X-ray Diagnosis and Report Generation

A Fully Transformer Based Multimodal Framework for Explainable Cancer Image Segmentation Using Radiology Reports

XAI-Driven Spectral Analysis of Cough Sounds for Respiratory Disease Characterization

XDR-LVLM: An Explainable Vision-Language Large Model for Diabetic Retinopathy Diagnosis

RadReason: Radiology Report Evaluation Metric with Reasons and Sub-Scores
