Explainability and Causality in AI Systems

The field of artificial intelligence is placing growing emphasis on explainability and causality, developing techniques that expose the decision-making processes of complex models. This trend is driven by the need for transparent and trustworthy AI, particularly in high-stakes applications such as healthcare and finance. Recent research has made significant progress on several fronts: new methods for explaining deep learning models, discovering causal relationships in data, and learning from simulators. In particular, partial orders, exemplars, and natural language rules have shown promise for producing more accurate and interpretable explanations, while the combination of causal machine learning and meta-learning enables the estimation of individualized treatment effects and supports more personalized AI systems. Noteworthy papers include Revealing Inherent Concurrency in Event Data, which introduces a process-discovery algorithm that preserves inherent concurrency; GnnXemplar, which proposes a global explainer for Graph Neural Networks that expresses predictions as natural language rules; and Causal Machine Learning for Surgical Interventions, which develops a multi-task meta-learning framework for estimating individualized treatment effects in surgical decision-making.
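To make the meta-learning approach to individualized treatment effects concrete, the following is a minimal sketch of a T-learner, one of the simplest meta-learners for ITE estimation. It is not taken from any of the papers listed below; the synthetic data and linear outcome models are illustrative assumptions only.

```python
import numpy as np

# Hypothetical synthetic data: outcome y depends on a covariate x and a
# binary treatment t, with a treatment effect that varies with x
# (true individualized effect = 1 + 2*x).
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, size=n)
t = rng.integers(0, 2, size=n)          # randomized treatment assignment
y = 0.5 * x + t * (1.0 + 2.0 * x) + rng.normal(0.0, 0.1, size=n)

def fit_linear(xs, ys):
    """Least-squares fit of y ~ [1, x]; returns (intercept, slope)."""
    A = np.column_stack([np.ones(len(xs)), xs])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# T-learner: fit separate outcome models on the treated and control groups.
coef_treated = fit_linear(x[t == 1], y[t == 1])
coef_control = fit_linear(x[t == 0], y[t == 0])

def predict(coef, xs):
    return coef[0] + coef[1] * xs

# The estimated ITE is the difference between the two models' predictions.
x_grid = np.array([0.0, 0.5, 1.0])
ite = predict(coef_treated, x_grid) - predict(coef_control, x_grid)
print(ite)  # should approximate the true effects 1.0, 2.0, 3.0
```

More elaborate meta-learners (e.g., those handling confounded, non-randomized assignment) replace the two linear fits with flexible regressors and add propensity-score corrections, but the core idea of composing outcome models to recover per-individual effects is the same.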

Sources

Revealing Inherent Concurrency in Event Data: A Partial Order Approach to Process Discovery

Generating Part-Based Global Explanations Via Correspondence

GnnXemplar: Exemplars to Explanations - Natural Language Rules for Global GNN Interpretability

xAI-CV: An Overview of Explainable Artificial Intelligence in Computer Vision

Learning From Simulators: A Theory of Simulation-Grounded Learning

Glass-Box Analysis for Computer Systems: Transparency Index, Shapley Attribution, and Markov Models of Branch Prediction

Towards Causal Representation Learning with Observable Sources as Auxiliaries

Towards Practical Multi-label Causal Discovery in High-Dimensional Event Sequences via One-Shot Graph Aggregation

Causal Machine Learning for Surgical Interventions

Practical do-Shapley Explanations with Estimand-Agnostic Causal Inference
