Interpretable Machine Learning for Decision Making

Machine learning research is moving toward models that are interpretable by design, with the goal of supporting decision making in domains such as football tactics, medical image analysis, and healthcare. Researchers are exploring several routes to building interpretability into models, including low-dimensional modeling of spatiotemporal agent states, uncertainty-aware variational information pursuit, and hybrid models that combine symbolic reasoning with conventional machine learning. These advances could increase trust in, and adoption of, machine learning in high-stakes settings. Noteworthy papers in this area include Uncertainty-Aware Variational Information Pursuit for Interpretable Medical Image Analysis, which introduces a framework for uncertainty-aware reasoning in medical image analysis, and Interpretable Hybrid Machine Learning Models Using FOLD-R++ and Answer Set Programming, which integrates symbolic reasoning with conventional machine learning to achieve high interpretability without sacrificing accuracy.
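The digest does not describe implementation details, but the general idea behind information pursuit, asking interpretable queries one at a time until the prediction is confident enough, can be illustrated with a minimal sketch. The sketch below is a simplified, non-variational greedy version using empirical estimates on a reference set; the function names, the entropy-based stopping rule, and the toy data are illustrative assumptions, not the method of the cited paper.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def posterior(y_sub, n_classes):
    """Empirical label distribution over a subset of reference labels."""
    counts = np.bincount(y_sub, minlength=n_classes)
    return counts / max(counts.sum(), 1)

def greedy_information_pursuit(x_new, X, y, n_classes, max_queries=5, stop_entropy=0.2):
    """Sequentially ask the binary query whose answer is expected to reduce
    label entropy the most, stopping once the posterior is confident.

    x_new: binary query answers for the new case (shape: n_queries).
    X, y: reference set of query answers (n_samples, n_queries) and labels.
    """
    answers = {}                         # query index -> answer observed on x_new
    mask = np.ones(len(y), dtype=bool)   # reference cases consistent with answers so far
    for _ in range(max_queries):
        post = posterior(y[mask], n_classes)
        if entropy(post) <= stop_entropy:
            break                        # uncertainty low enough: stop asking
        best_q, best_cond_ent = None, np.inf
        for q in range(X.shape[1]):
            if q in answers:
                continue
            # expected label entropy after observing query q's answer
            cond_ent = 0.0
            for a in (0, 1):
                sub = mask & (X[:, q] == a)
                if sub.sum() == 0:
                    continue
                p_a = sub.sum() / mask.sum()
                cond_ent += p_a * entropy(posterior(y[sub], n_classes))
            if cond_ent < best_cond_ent:
                best_q, best_cond_ent = q, cond_ent
        if best_q is None:
            break
        a = int(x_new[best_q])           # "ask" the chosen query on the new case
        answers[best_q] = a
        new_mask = mask & (X[:, best_q] == a)
        if new_mask.sum() > 0:
            mask = new_mask
    return answers, posterior(y[mask], n_classes)

# Toy usage: three binary queries, labels driven mostly by query 0.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3))
y = (X[:, 0] ^ (rng.random(200) < 0.1)).astype(int)
asked, post = greedy_information_pursuit(np.array([1, 0, 1]), X, y, n_classes=2)
print(asked, post)   # queries asked (in order) and the resulting label posterior
```

The interpretable output is the sequence of asked queries together with the posterior they produce; the stopping threshold plays the role of the uncertainty awareness emphasized in the paper's title, though the paper's actual variational formulation differs.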

Sources

Interpretable Low-Dimensional Modeling of Spatiotemporal Agent States for Decision Making in Football Tactics

Uncertainty-Aware Variational Information Pursuit for Interpretable Medical Image Analysis

Interpretable Hybrid Machine Learning Models Using FOLD-R++ and Answer Set Programming

Argumentative Ensembling for Robust Recourse under Model Multiplicity

Interpretable Representation Learning for Additive Rule Ensembles
