Explainable AI in Medical Diagnosis

The field of medical diagnosis is shifting toward explainable AI, with a focus on transparent, trustworthy models that deliver accurate predictions while revealing their decision-making process. Recent studies demonstrate the potential of multimodal deep learning frameworks, generative adversarial networks, and prototype learning models for detecting lung diseases, classifying skin lesions, and estimating ejection fractions. These approaches achieve high accuracy and outperform traditional methods while also providing interpretable explanations and visualizations of their results. Notably, the integration of explainable AI techniques such as Grad-CAM, SHAP, and LIME lets clinicians understand the factors driving a prediction and build trust in the models. Applications range from telemedicine and point-of-care diagnostics to real-world respiratory screening and continuous neurocognitive monitoring.

Noteworthy papers include:

- An explainable multimodal deep learning study for automatic lung-disease detection, which achieved strong generalization and outperformed ablated variants.
- ProtoEFNet, a video-based prototype learning model for continuous ejection fraction regression, which provided clinically relevant insights and matched the accuracy of non-interpretable counterparts.
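Of the explanation techniques mentioned above, LIME is the simplest to illustrate: it explains a single prediction by fitting a weighted linear surrogate to the black-box model in a neighbourhood of the input. Below is a minimal NumPy sketch of that core idea; the function name, kernel, and toy "diagnostic" model are illustrative assumptions, not taken from any of the cited papers or from the reference LIME implementation.

```python
import numpy as np

def lime_explain(predict_fn, x, num_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around x (LIME's core idea).

    predict_fn: black-box model returning a scalar score per sample.
    Returns per-feature weights approximating the model's behaviour near x.
    """
    rng = np.random.default_rng(seed)
    # Sample the local neighbourhood by perturbing the input with Gaussian noise.
    Z = x + rng.normal(scale=0.1, size=(num_samples, x.size))
    y = np.array([predict_fn(z) for z in Z])
    # Exponential kernel: perturbations closer to x get more weight.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((num_samples, 1)), Z])
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return beta[1:]  # per-feature local importance (intercept dropped)

# Toy stand-in for a diagnostic model: only the first two features matter.
model = lambda z: 3.0 * z[0] - 2.0 * z[1]
weights = lime_explain(model, np.array([0.5, 0.5, 0.5]))
```

Because the toy model is exactly linear, the surrogate recovers its coefficients (roughly 3, -2, 0); for a real classifier the weights instead describe only the local behaviour around the given patient's input.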

Sources

Explainable Multi-Modal Deep Learning for Automatic Detection of Lung Diseases from Respiratory Audio Signals

XAI-Driven Skin Disease Classification: Leveraging GANs to Augment ResNet-50 Performance

SAND Challenge: Four Approaches for Dysarthria Severity Classification

ProtoEFNet: Dynamic Prototype Learning for Inherently Interpretable Ejection Fraction Estimation in Echocardiography

A Hybrid Deep Learning Framework with Explainable AI for Lung Cancer Classification with DenseNet169 and SVM

State Space Models for Bioacoustics: A Comparative Evaluation with Transformers

Standard audiogram classification from loudness scaling data using unsupervised, supervised, and explainable machine learning techniques

Toward Continuous Neurocognitive Monitoring: Integrating Speech AI with Relational Graph Transformers for Rare Neurological Diseases
