Explainability and Interpretable Models in Machine Learning

The field of machine learning is moving towards more explainable and interpretable models. This trend is driven by the need for trust and transparency in AI decision-making, particularly in high-stakes domains such as healthcare and finance. Recent research has focused on models that provide human-interpretable explanations, improving clinicians' trust in, and the usability of, medical image diagnosis systems. The integration of attribute-based explanations and the use of synthetic data to overcome dataset limitations are also being explored (a minimal sketch of the attribute-based idea follows the paper list below). Furthermore, plug-and-play frameworks for explaining models of complex tasks, such as network alignment, are gaining attention. Noteworthy papers in this area include:

  • Minimum Data, Maximum Impact, which proposes synthesizing attribute-annotated data using a generative model to enhance explainable models in medical image analysis.
  • NAEx, a plug-and-play framework for explaining network alignment models by identifying key subgraphs and features influencing predictions.
  • An Explainable Machine Learning Framework for Railway Predictive Maintenance, which implements a processing pipeline combining online fault prediction with natural-language and visual explanations.
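
To make the attribute-based direction concrete, the sketch below shows a concept-bottleneck-style classifier that first predicts human-interpretable attributes and then derives the label only from those attributes, so each decision can be traced to named concepts. The attribute names, the synthetic data, and the two-stage logistic-regression design are illustrative assumptions, not the method of any paper listed here.

    # Minimal sketch of an attribute-based (concept-bottleneck style) explainer.
    # Assumptions: attribute names, synthetic data, and the two-stage design
    # are illustrative only, not taken from the cited papers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical human-interpretable attributes for a lung-nodule classifier.
    ATTRIBUTES = ["spiculated_margin", "irregular_shape", "high_density"]

    # Synthetic image features (stand-ins for embeddings) with attribute/label annotations.
    n, d = 200, 16
    X = rng.normal(size=(n, d))
    A = (X[:, :3] + 0.3 * rng.normal(size=(n, 3)) > 0).astype(int)  # attribute labels
    y = (A.sum(axis=1) >= 2).astype(int)                            # label from attributes

    # Stage 1: predict each interpretable attribute from the raw features.
    attr_models = [LogisticRegression(max_iter=1000).fit(X, A[:, j])
                   for j in range(len(ATTRIBUTES))]

    # Stage 2: predict the label from attribute predictions only, so every
    # decision is expressed in terms of human-readable concepts.
    A_hat = np.column_stack([m.predict_proba(X)[:, 1] for m in attr_models])
    label_model = LogisticRegression(max_iter=1000).fit(A_hat, y)

    def explain(x):
        """Return the predicted label and the attribute evidence behind it."""
        a = np.array([m.predict_proba(x[None])[0, 1] for m in attr_models])
        pred = int(label_model.predict(a[None])[0])
        evidence = {name: round(float(p), 2) for name, p in zip(ATTRIBUTES, a)}
        return pred, evidence

    print(explain(X[0]))

Because the final prediction depends only on the predicted attributes, an explanation can be reported as a short list of concept scores rather than raw pixel attributions.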

Sources

Honey Classification using Hyperspectral Imaging and Machine Learning

Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification

Unveiling Location-Specific Price Drivers: A Two-Stage Cluster Analysis for Interpretable House Price Predictions

Fast and Accurate Explanations of Distance-Based Classifiers by Uncovering Latent Explanatory Structures

NAEx: A Plug-and-Play Framework for Explaining Network Alignment

An Explainable Machine Learning Framework for Railway Predictive Maintenance using Data Streams from the Metro Operator of Portugal
