The field of Explainable AI (XAI) is moving toward more transparent and trustworthy models by improving explanation methods and evaluating their faithfulness. Recent research addresses the limitations of current saliency-based methods and introduces new frameworks for visual explanations that align with human understanding and inquiry. Noteworthy papers include:
- Beyond saliency: enhancing explanation of speech emotion recognition with expert-referenced acoustic cues, which proposes a framework that links saliency to expert-referenced acoustic cues of speech emotions.
- Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations, which introduces a principled conceptual framework that organizes saliency explanations along two essential axes.
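
For readers less familiar with the saliency maps both papers take as their starting point, the sketch below shows a plain vanilla-gradient saliency map, the kind of attribution these frameworks aim to move beyond. It is a minimal, assumption-laden toy in PyTorch: the model, input shape, and target class are placeholders and are not drawn from either paper.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for any differentiable model (placeholder, not from the papers).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a preprocessed input
target_class = 3                                       # stand-in for the class being explained

score = model(image)[0, target_class]  # scalar logit for the target class
score.backward()                       # gradient of the score w.r.t. every input value

# Collapse the channel dimension; large values mark inputs the prediction is most sensitive to.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape: (32, 32)
print(saliency.shape)
```

Critiques like those above note that such gradient heatmaps say little on their own about why a prediction was made, which is what motivates linking them to expert-referenced cues or organizing them within a human-aligned taxonomy.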