The field of machine learning is moving towards more explainable and interpretable models, driven by the need for trust and transparency in AI decision-making, particularly in high-stakes domains such as healthcare and finance. Recent research has focused on models that provide human-interpretable explanations, improving clinicians' trust in, and the usability of, medical image diagnosis systems. The integration of attribute-based explanations and the use of synthetic data to overcome dataset limitations are also being explored. Furthermore, plug-and-play frameworks for explaining complex models, such as those used for network alignment, are gaining attention. Noteworthy papers in this area include:
- Minimum Data, Maximum Impact, which proposes synthesizing attribute-annotated data using a generative model to enhance explainable models in medical image analysis.
- NAEx, a plug-and-play framework for explaining network alignment models by identifying key subgraphs and features influencing predictions.
- An Explainable Machine Learning Framework for Railway Predictive Maintenance, which implements a processing pipeline that combines online fault prediction with natural-language and visual explanations (see the sketch after this list).
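To make the plug-and-play explainability theme concrete, below is a minimal, illustrative sketch, not the method of any paper above: it wraps a black-box fault classifier with model-agnostic permutation feature importance and turns the resulting ranking into a short natural-language explanation. The feature names, data, and threshold are synthetic assumptions chosen only for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical sensor features for a railway asset; names and data are
# illustrative only and do not come from the papers above.
rng = np.random.default_rng(0)
feature_names = ["axle_temp", "vibration_rms", "brake_pressure", "motor_current"]
X = rng.normal(size=(2000, len(feature_names)))
# Synthetic fault label loosely tied to temperature and vibration.
y = ((0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(scale=0.5, size=2000)) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any black-box classifier could stand in for the online fault predictor.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-agnostic ("plug-and-play") explanation: permutation feature importance.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1])

# Turn the ranking into a short natural-language explanation.
top = [name for name, _ in ranked[:2]]
print(f"Predicted fault risk is driven mainly by {top[0]} and {top[1]}.")
for name, score in ranked:
    print(f"  {name:15s} importance = {score:.3f}")
```

The same pattern applies to any fitted estimator, which is what makes the approach plug-and-play: the explainer only needs prediction access to the model, not its internals.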