Advances in Explainable AI and Model Interpretability

The field of Explainable AI (XAI) is advancing rapidly, with researchers developing techniques to interpret and understand the decisions made by complex machine learning models. Recent work explores several approaches to explainability, including model-agnostic explanations, feature attribution methods, and transparent model design. One notable trend is the integration of XAI methods with other areas of AI research, such as natural language processing and computer vision, to produce more comprehensive and informative explanations. There is also growing emphasis on evaluating the effectiveness and robustness of XAI methods, including the development of rigorous evaluation metrics and frameworks. Noteworthy papers in this area include PersonaTwin, which introduces a multi-tier prompt conditioning framework for generating and evaluating personalized digital twins, and Rule2Text, which presents a framework for generating natural language explanations of knowledge graph rules. Overall, the field is moving toward more transparent, interpretable, and trustworthy AI systems.
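To make the idea of a model-agnostic, feature-attribution explanation concrete, the sketch below fits a LIME-style weighted linear surrogate around a single prediction of a black-box classifier. It is a minimal illustration of the general technique only: the synthetic dataset, random forest, Gaussian perturbation scheme, and kernel width are assumptions for the example, not taken from any of the papers listed under Sources.

```python
# Minimal sketch of a LIME-style local surrogate explanation for a black-box
# classifier. Illustrative assumptions: synthetic data, a random forest as the
# opaque model, Gaussian perturbations, and an exponential proximity kernel.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque "black-box" model on synthetic data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_instance(x, model, n_samples=1000, kernel_width=1.0):
    """Fit a weighted linear surrogate around x and return per-feature weights."""
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise to probe the local decision surface.
    Z = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.shape[0]))
    # Query the black box for class-1 probabilities on the perturbed samples.
    preds = model.predict_proba(Z)[:, 1]
    # Weight samples by proximity to x (exponential kernel on Euclidean distance).
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # The surrogate's coefficients serve as local feature attributions.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

attributions = explain_instance(X[0], black_box)
for i, w in enumerate(attributions):
    print(f"feature_{i}: {w:+.3f}")
```

The surrogate's coefficients indicate which features most influence the black box's prediction in the neighborhood of the chosen instance, which is the core intuition behind the SHAP- and LIME-related work cited below.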

Sources

PersonaTwin: A Multi-Tier Prompt Conditioning Framework for Generating and Evaluating Personalized Digital Twins

Rule2Text: A Framework for Generating and Evaluating Natural Language Explanations of Knowledge Graph Rules

SHLIME: Foiling adversarial attacks fooling SHAP and LIME

Model Interpretability and Rationale Extraction by Input Mask Optimization

Rationalizing Transformer Predictions via End-To-End Differentiable Self-Training

Informative Post-Hoc Explanations Only Exist for Simple Functions

Predicting and Explaining Traffic Crash Severity Through Crash Feature Selection

A Comprehensive Perspective on Explainable AI across the Machine Learning Workflow

AI-Augmented CI/CD Pipelines: From Code Commit to Production with Autonomous Decisions

Learning Marked Temporal Point Process Explanations based on Counterfactual and Factual Reasoning

Rigorous Feature Importance Scores based on Shapley Value and Banzhaf Index

Interpreting Time Series Forecasts with LIME and SHAP: A Case Study on the Air Passengers Dataset

From Fomo3D to Lottery DAPP: Analysis of Ethereum-Based Gambling Applications

fCrit: A Visual Explanation System for Furniture Design Creative Support

A One-Class Explainable AI Framework for Identification of Non-Stationary Concurrent False Data Injections in Nuclear Reactor Signals

Organization Matters: A Qualitative Study of Organizational Dynamics in Red Teaming Practices For Generative AI

How can we trust opaque systems? Criteria for robust explanations in XAI

Reliability, Embeddedness, and Agency: A Utility-Driven Mathematical Framework for Agent-Centric AI Adoption

Using AI for User Representation: An Analysis of 83 Persona Prompts

Hierarchical Evaluation Function (HEF): A Multi-Metric Approach for Optimizing Demand Forecasting Models

Explainability of Algorithms

ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings

The AI-Fraud Diamond: A Novel Lens for Auditing Algorithmic Deception

Documenting Deployment with Fabric: A Repository of Real-World AI Governance

Explainable Information Design

Breakable Machine: A K-12 Classroom Game for Transformative AI Literacy Through Spoofing and eXplainable AI (XAI)

Multi-Rationale Explainable Object Recognition via Contrastive Conditional Inference

Exact Shapley Attributions in Quadratic-time for FANOVA Gaussian Processes

Towards LLM-generated explanations for Component-based Knowledge Graph Question Answering Systems

A Fuzzy-Enhanced Explainable AI Framework for Flight Continuous Descent Operations Classification

Multilingual Datasets for Custom Input Extraction and Explanation Requests Parsing in Conversational XAI Systems

Argumentation for Explainable Workforce Optimisation (with Appendix)

Locally Pareto-Optimal Interpretations for Black-Box Machine Learning Models

ExBigBang: A Dynamic Approach for Explainable Persona Classification through Contextualized Hybrid Transformer Analysis
