Explainability and Transparency in AI Systems

The field of Artificial Intelligence is placing growing emphasis on explainability and transparency. Researchers are developing methods and techniques that expose the decision-making processes of AI systems, making those systems more trustworthy and reliable. This trend is evident in new explainable AI (XAI) frameworks, evaluation protocols, and visualization tools, all aimed at enabling domain experts and end users to understand the reasoning behind AI-driven decisions, which is crucial in high-stakes applications such as healthcare and climate science.

Noteworthy papers in this regard include a framework for the systematic assessment and reporting of explainable AI features, which provides a comprehensive overview of evaluation protocols and metrics for XAI methods. Another significant contribution is a cognitive model of understanding explanations, which highlights the importance of cognitive accessibility and perceptual optimization in XAI. Additionally, low-code and no-code strategies for climate dashboards, along with applications of explainable AI in real-world settings such as the detection of AI-generated videos and the diagnosis of Age-related Macular Degeneration, demonstrate the potential of XAI to drive positive impact across domains.

Sources

xInv: Explainable Optimization of Inverse Problems

A Tale of Two Systems: Characterizing Architectural Complexity on Machine Learning-Enabled Systems

ALEA IACTA EST: A Declarative Domain-Specific Language for Manually Performable Random Experiments

An Explainable AI Framework for Dynamic Resource Management in Vehicular Network Slicing

Recommendations and Reporting Checklist for Rigorous & Transparent Human Baselines in Model Evaluations

A Systematic Review of User-Centred Evaluation of Explainable AI in Healthcare

Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features

Mxplainer: Explain and Learn Insights by Imitating Mahjong Agents

Low-code to fight climate change: the Climaborough project

Towards Desiderata-Driven Design of Visual Counterfactual Explainers

See What I Mean? CUE: A Cognitive Model of Understanding Explanations

WebXAII: an open-source web framework to study human-XAI interaction

DAVID-XR1: Detecting AI-Generated Videos with Explainable Reasoning

CACTUS as a Reliable Tool for Early Classification of Age-related Macular Degeneration

Unifying VXAI: A Systematic Review and Framework for the Evaluation of Explainable AI
