Interpretable Machine Learning Models

The field of machine learning is moving toward more interpretable models, with a focus on understanding how individual features contribute to a model's decisions. This is particularly important in high-stakes domains such as healthcare, where interpretability is essential for making informed decisions. Recent work introduces models and frameworks that build interpretability in from the start, for example through shape functions and counterfactual explanations, and reports competitive predictive performance across a range of tasks, including image-to-music generation, graph classification, and social recommendation.

Notable papers include Multiplicative-Additive Constrained Models, which improve on existing additive models by disentangling interactive and independent terms so both kinds of effects can be visualized jointly; MUSE-Explainer, which provides clear, human-friendly counterfactual explanations for symbolic music graph classification models; SoREX, a self-explanatory framework for social recommendation based on relevant ego-path extraction; and GDLNN, which combines programming-language techniques with neural networks for accurate and easy-to-explain graph classification.
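To make the additive/multiplicative distinction concrete, the sketch below shows a toy model in which each feature gets its own shape function (the independent effect) while pairwise products are handled by a separate set of interaction weights, so the two kinds of effects can be inspected apart. This is an illustrative assumption-laden sketch, not the implementation from any of the papers listed below; all class and variable names are hypothetical.

```python
# Toy multiplicative-additive model: per-feature shape functions plus a
# separate term for pairwise (multiplicative) interactions. Keeping the two
# sums separate is what makes it easy to visualize independent and
# interactive effects on their own axes. Illustrative only.
import torch
import torch.nn as nn

class ShapeFunction(nn.Module):
    """Small MLP mapping a single feature to its additive contribution."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):               # x: (batch, 1)
        return self.net(x)              # (batch, 1)

class MultiplicativeAdditiveToy(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        # Independent (additive) effects: one shape function per feature.
        self.shapes = nn.ModuleList([ShapeFunction(hidden) for _ in range(n_features)])
        # Interactive effects: one learned weight per feature-pair product.
        n_pairs = n_features * (n_features - 1) // 2
        self.pair_weights = nn.Parameter(torch.zeros(n_pairs))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):               # x: (batch, n_features)
        additive = sum(s(x[:, i:i+1]) for i, s in enumerate(self.shapes)).squeeze(-1)
        pairs = []
        n = x.shape[1]
        for i in range(n):
            for j in range(i + 1, n):
                pairs.append(x[:, i] * x[:, j])
        interactive = torch.stack(pairs, dim=1) @ self.pair_weights
        return self.bias + additive + interactive

# Toy usage: the target mixes an independent effect sin(x0) with an
# interaction x1 * x2, which the model represents in separate terms.
X = torch.randn(512, 3)
y = torch.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * torch.randn(512)
model = MultiplicativeAdditiveToy(n_features=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```

After training, each shape function can be plotted against its feature and each pair weight read off directly, which is the kind of joint visualization of independent and interactive effects that constrained additive models aim for.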

Sources

Multiplicative-Additive Constrained Models: Toward Joint Visualization of Interactive and Independent Effects

Zero-Effort Image-to-Music Generation: An Interpretable RAG-based VLM Approach

Graph Mixing Additive Networks

MUSE-Explainer: Counterfactual Explanations for Symbolic Music Graph Classification Models

SoREX: Towards Self-Explainable Social Recommendation with Relevant Ego-Path Extraction

GDLNN: Marriage of Programming Language and Neural Networks for Accurate and Easy-to-Explain Graph Classification
