Advances in Interpretable Machine Learning

Machine learning research is moving toward greater interpretability, with a focus on models whose decisions are transparent and explainable. Recent work has advanced this goal through new techniques for interpreting complex models. One key trend is the use of probabilistic prototypes and graph-based methods to expose the decision-making processes of neural networks; another is multi-granular interpretability, which lets users examine the reasoning behind a prediction at different levels of abstraction. This shift toward transparency has direct implications for trust, safety, and reliability. Noteworthy papers include VI3NR, which improves the initialization of implicit neural representations, and From GNNs to Trees, which introduces a Tree-like Interpretable Framework for graph classification. Together, these works illustrate the progress toward more interpretable models and underline the importance of continued research in this area.
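To make the prototype trend concrete, the sketch below shows a generic prototype-based classifier in which class scores are computed from distances to learned prototype vectors, so every prediction can be traced back to the prototypes that support it. This is a minimal illustrative toy under assumed names and shapes, not the method of any paper listed under Sources.

# Generic sketch of prototype-based interpretation (illustrative only; all
# names and shapes are hypothetical, not taken from any listed paper).
# Class scores are derived from distances to learned prototype vectors, so a
# prediction can be explained by the prototypes that are closest to the input.
import numpy as np

rng = np.random.default_rng(0)

n_classes, protos_per_class, dim = 3, 2, 16
# In practice these would be learned during training; here they are random.
prototypes = rng.normal(size=(n_classes * protos_per_class, dim))
proto_class = np.repeat(np.arange(n_classes), protos_per_class)

def predict_with_explanation(embedding):
    """Score classes by similarity to their prototypes and report the nearest ones."""
    dists = np.linalg.norm(prototypes - embedding, axis=1)
    sims = -dists  # higher similarity corresponds to smaller distance
    # Class score = best similarity among that class's prototypes.
    class_scores = np.array([sims[proto_class == c].max() for c in range(n_classes)])
    probs = np.exp(class_scores) / np.exp(class_scores).sum()
    nearest = np.argsort(dists)[:3]  # the prototypes that explain the decision
    evidence = [(int(i), int(proto_class[i]), float(dists[i])) for i in nearest]
    return probs, evidence

probs, evidence = predict_with_explanation(rng.normal(size=dim))
print("class probabilities:", probs)
print("nearest prototypes (id, class, distance):", evidence)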

Sources

Interpretable Affordance Detection on 3D Point Clouds with Probabilistic Prototypes

VI3NR: Variance Informed Initialization for Implicit Neural Representations

Explaining Vision GNNs: A Semantic and Visual Analysis of Graph-based Image Classification

A constraints-based approach to fully interpretable neural networks for detecting learner behaviors

Representation Learning on a Random Lattice

Explanation format does not matter; but explanations do - An Eggsbert study on explaining Bayesian Optimisation tasks

Explanations Go Linear: Interpretable and Individual Latent Encoding for Post-hoc Explainability

In defence of post-hoc explanations in medical AI

Disjunctive and Conjunctive Normal Form Explanations of Clusters Using Auxiliary Information

Dual Explanations via Subgraph Matching for Malware Detection

From GNNs to Trees: Multi-Granular Interpretability for Graph Neural Networks

On the Importance of Gaussianizing Representations
