Explainable AI for Complex Models

The field of Explainable AI (XAI) is moving toward more transparent and trustworthy models by improving explanation methods and evaluating their faithfulness. Recent research addresses the limitations of current saliency-based methods and introduces new frameworks for visual explanations that align with human understanding and inquiry; a brief sketch of the baseline gradient-saliency computation these works build on appears after the list below. Noteworthy papers include:

  • Beyond saliency: enhancing explanation of speech emotion recognition with expert-referenced acoustic cues, which proposes a framework that links saliency to expert-referenced acoustic cues of speech emotions.
  • Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations, which introduces a principled conceptual framework that organizes saliency explanations along two essential axes.
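
For context, the following is a minimal, hypothetical sketch of a vanilla gradient saliency map, the baseline form of attribution that the papers above critique and extend. The toy model, input shape, and target class are illustrative assumptions and are not drawn from any of the listed works.

    # Vanilla gradient saliency: |d logit / d input|, reduced over channels.
    import torch
    import torch.nn as nn

    # Toy classifier standing in for any differentiable model (assumption).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    x = torch.rand(1, 3, 32, 32, requires_grad=True)  # one RGB "image"
    target_class = 3                                   # class to explain (arbitrary choice)

    logits = model(x)
    logits[0, target_class].backward()                 # gradient of the target logit w.r.t. the input

    # Per-pixel saliency: absolute gradient, max over colour channels.
    saliency = x.grad.detach().abs().max(dim=1).values  # shape (1, 32, 32)
    print(saliency.shape, float(saliency.max()))

Raw gradient maps of this kind are the starting point that expert-referenced cues, human-aligned taxonomies, and faithfulness evaluations in the papers above aim to improve on.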

Sources

Beyond saliency: enhancing explanation of speech emotion recognition with expert-referenced acoustic cues

Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations

Mind the Gaps: Measuring Visual Artifacts in Dimensionality Reduction

FunnyNodules: A Customizable Medical Dataset Tailored for Evaluating Explainable AI

Learning from Sufficient Rationales: Analysing the Relationship Between Explanation Faithfulness and Token-level Regularisation Strategies

Correlation-Aware Feature Attribution Based Explainable AI
