Explainable AI Advances in Education and Decision-Making

The field of Explainable AI (XAI) is evolving rapidly, with a focus on enhancing trust and transparency in AI applications. Recent work addresses the challenges of XAI in education, including the lack of standardized definitions and the need for more effective explanation techniques. Researchers are exploring methods to improve the interpretability of AI models, such as comparative explanations and uncertainty propagation, alongside user-centered approaches such as adaptive GenAI-driven visualization tools and explanation-driven interventions that let end-users customize black-box models. These advances have the potential to significantly improve decision-making in high-stakes domains such as healthcare and education.

Noteworthy papers include Uncertainty Propagation in XAI, which introduces a unified framework for quantifying and interpreting uncertainty in explanation methods, and V-CEM, which leverages variational inference to improve intervention responsiveness in concept-based models.
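To make the idea of uncertainty propagation in XAI concrete, here is a minimal empirical sketch: Gaussian noise is injected into a model's input, a simple gradient-based attribution is recomputed for each noisy sample, and the spread of the resulting attributions is reported as their uncertainty. This is only an illustration of the general Monte Carlo approach, not the specific framework from the cited paper; the function names (`gradient_attribution`, `attribution_uncertainty`) and the toy linear model are hypothetical choices for this example.

```python
import numpy as np

def gradient_attribution(model_fn, x, eps=1e-4):
    """Central finite-difference gradient attribution for a scalar-valued model."""
    grads = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grads[i] = (model_fn(x + d) - model_fn(x - d)) / (2 * eps)
    return grads

def attribution_uncertainty(model_fn, x, noise_std=0.05, n_samples=200, seed=0):
    """Monte Carlo propagation of input noise into attribution scores.

    Returns the per-feature mean and standard deviation of the attribution
    under Gaussian perturbations of the input -- an empirical estimator of
    explanation uncertainty.
    """
    rng = np.random.default_rng(seed)
    samples = np.stack([
        gradient_attribution(model_fn, x + rng.normal(0.0, noise_std, x.shape))
        for _ in range(n_samples)
    ])
    return samples.mean(axis=0), samples.std(axis=0)

# Toy model: a fixed linear scorer. Its gradient is constant, so the
# attribution should be stable (near-zero uncertainty) under input noise.
weights = np.array([2.0, -1.0, 0.5])
model = lambda x: float(weights @ x)

mean_attr, std_attr = attribution_uncertainty(model, np.array([1.0, 1.0, 1.0]))
```

For a nonlinear model the standard deviations would generally be nonzero, flagging features whose explanations are sensitive to small input changes, which is the kind of signal such estimators aim to surface.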

Sources

Systematic Literature Review: Explainable AI Definitions and Challenges in Education

From Questions to Insights: Exploring XAI Challenges Reported on Stack Overflow Questions

Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators

Comparative Explanations: Explanation Guided Decision Making for Human-in-the-Loop Preference Selection

V-CEM: Bridging Performance and Intervenability in Concept-based Models

User-Centered AI for Data Exploration -- Rethinking GenAI's Role in Visualization

Usability Testing of an Explainable AI-enhanced Tool for Clinical Decision Support: Insights from the Reflexive Thematic Analysis

Explanation-Driven Interventions for Artificial Intelligence Model Customization: Empowering End-Users to Tailor Black-Box AI in Rhinocytology

A moving target in AI-assisted decision-making: Dataset shift, model updating, and the problem of update opacity

A Multimedia Analytics Model for the Foundation Model Era

Evaluation of the impact of expert knowledge: How decision support scores impact the effectiveness of automatic knowledge-driven feature engineering (aKDFE)

FAME: Introducing Fuzzy Additive Models for Explainable AI
