Explainable AI for Safety-Critical Applications

Research in explainable AI is increasingly directed at explaining complex models deployed in safety-critical applications such as autonomous driving, healthcare, and finance. Recent work focuses on improving the interpretability and transparency of deep learning models, with particular emphasis on counterfactual explanations and model-agnostic explainability techniques. Notable contributions include a guided reverse process for categorical features based on an approximation of the Gumbel-softmax distribution, and a latent diffusion model for generating video counterfactual explanations. These advances could increase trust in and reliability of AI systems, and help enable their deployment in high-stakes domains.
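As a rough illustration of the Gumbel-softmax relaxation referenced above (a generic sketch, not the cited paper's actual guided reverse process), the snippet below draws a differentiable, approximately one-hot sample for a categorical feature; the function name, temperature value, and toy class probabilities are illustrative assumptions.

```python
# Minimal sketch of the Gumbel-softmax relaxation used to handle categorical
# features in diffusion-based counterfactual methods. Names and values here
# are illustrative, not taken from the cited paper.
import numpy as np

def gumbel_softmax_sample(logits, temperature=0.5, rng=None):
    """Draw a relaxed (differentiable) one-hot sample for a categorical feature.

    logits: unnormalized log-probabilities over the categories.
    temperature: lower values push the sample closer to a hard one-hot vector.
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1)
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    # Softmax over the perturbed, temperature-scaled logits
    scaled = (logits + gumbel_noise) / temperature
    scaled -= scaled.max()            # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Example: a hypothetical categorical feature with three classes
logits = np.log(np.array([0.7, 0.2, 0.1]))
sample = gumbel_softmax_sample(logits, temperature=0.3)
print(sample)  # close to one-hot, yet differentiable w.r.t. the logits
```

Lowering the temperature makes samples approach hard one-hot vectors, while higher temperatures keep them smoother and easier to differentiate through, which is the trade-off such relaxations exploit inside a reverse diffusion step.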

Sources

Tabular Diffusion Counterfactual Explanations

XAI-Driven Machine Learning System for Driving Style Recognition and Personalized Recommendations

An Explainable Gaussian Process Auto-encoder for Tabular Data

Explaining What Machines See: XAI Strategies in Deep Object Detection Models

Predicting NCAP Safety Ratings: An Analysis of Vehicle Characteristics and ADAS Features Using Machine Learning

Rashomon in the Streets: Explanation Ambiguity in Scene Understanding

LD-ViCE: Latent Diffusion Model for Video Counterfactual Explanations
