Explainable AI Advancements

The field of explainable AI (XAI) is moving toward more scalable, interpretable, and functional approaches. Recent work targets practical limitations such as the exponential cost of reasoning over all feature subsets and the limited expressiveness of summarizing a feature's effect as a single scalar value. Notable advances include frameworks that take a structured functional perspective, enabling novel diagnostics and quantitative measures of specific feature interactions. There is also growing interest in evaluating XAI approaches themselves, with formal, precise measures of importance and new tools benchmarked against existing ones. Overall, the field is advancing toward more effective and efficient methods for explaining AI models, with implications for applications ranging from public-safety analytics to data economics. Noteworthy papers include STRIDE, which presents a scalable framework for XAI via subset-free functional decomposition, and Evaluation of Black-Box XAI Approaches, which proposes a formal measure of importance and demonstrates that a new XAI tool outperforms existing ones.
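To make the "exponential cost of reasoning over feature subsets" concrete, the sketch below computes exact Shapley values by enumerating every feature subset — the brute-force baseline that subset-free methods like STRIDE aim to avoid. This is a generic illustration, not the algorithm from any of the papers listed; the toy additive model and weights are assumptions for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values via full subset enumeration.

    For each feature this loops over all 2^(n-1) subsets of the
    remaining features, so total cost grows as O(n * 2^n) calls
    to value_fn -- the exponential blow-up the digest refers to.
    """
    shap = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            # Standard Shapley weight for coalitions of size k.
            weight = factorial(k) * factorial(n_features - k - 1) / factorial(n_features)
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Marginal contribution of feature i to this coalition.
                shap[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return shap

# Toy additive model: for additive value functions, each feature's
# Shapley value equals its own weight, which makes a handy sanity check.
weights = [1.0, 2.0, 3.0]
value = lambda s: sum(weights[j] for j in s)
print(shapley_values(value, 3))  # [1.0, 2.0, 3.0]
```

Even at 30 features this enumeration already requires on the order of 30 * 2^30 model evaluations, which is why scalable XAI methods replace the subset sum with sampling or with decompositions that sidestep it entirely.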

Sources

Value bounds and Convergence Analysis for Averages of LRP attributions

STRIDE: Scalable and Interpretable XAI via Subset-Free Functional Decomposition

Explaining Tournament Solutions with Minimal Supports

Evaluation of Black-Box XAI Approaches for Predictors of Values of Boolean Formulae

A Dimensionality-Reduced XAI Framework for Roundabout Crash Severity Insights

Exploring Training Data Attribution under Limited Access Constraints
