Explainability and Human-Centered AI

The field of artificial intelligence is shifting toward a more human-centered approach, with growing emphasis on explainability and transparency in AI systems. Recent research focuses on new methods and frameworks for explaining AI decisions, on designing human-centered AI experiences, and on evaluating the quality of explanations. A key trend is the recognition that explanations should be designed and evaluated with a specific end in mind, taking users' needs and preferences into account. Another area of focus is the development of objective metrics for assessing explanation quality, such as veracity and fidelity, i.e., how faithfully an explanation reflects the model's actual behavior (a brief fidelity sketch follows the paper list below). Notable papers in this area include:

  • Explanations are a means to an end, which argues that explanation design and evaluation should be guided by the specific end the explanation serves.
  • Towards a Signal Detection Based Measure for Assessing Information Quality of Explainable Recommender Systems, which proposes an objective metric for evaluating the information quality of explanations (a generic signal-detection sketch also appears below).
  • Effective Explanations for Belief-Desire-Intention Robots, which investigates when users want explanations from robots and what content those explanations should contain.
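
To make the fidelity idea concrete, the sketch below measures fidelity in the standard way used in explainable-AI work: as the agreement rate between a black-box model and a simple interpretable surrogate trained to mimic it. The models and data here are illustrative assumptions, not taken from any of the papers listed in this digest.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def explanation_fidelity(black_box, surrogate, X):
    """Fidelity: fraction of inputs on which the interpretable
    surrogate's predictions agree with the black-box model's."""
    return float(np.mean(black_box.predict(X) == surrogate.predict(X)))

# Illustrative setup (assumed, not from the cited papers): a random
# forest plays the black box; a depth-3 decision tree is the surrogate.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))  # mimic the black box's labels

print(f"fidelity: {explanation_fidelity(black_box, surrogate, X_test):.3f}")
```

A high score means the surrogate, and hence any explanation read off it, tracks what the black box actually does on the data of interest.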
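
The signal-detection framing can likewise be sketched generically: present users with explanation statements that are either true or false of the system, record which ones they accept, and summarize their ability to discriminate with the sensitivity index d'. The formula below is textbook signal detection theory; the cited paper's specific measure is not reproduced here.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = Z(hit rate) - Z(false-alarm rate).
    A log-linear correction keeps both rates strictly inside (0, 1)
    so the inverse normal CDF stays finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: users accepted 40 of 50 true explanation
# statements and 15 of 50 false ones.
print(f"d' = {d_prime(40, 10, 15, 35):.2f}")  # ~1.33
```

Higher d' indicates explanations whose true content users can reliably tell apart from false content, independent of their overall tendency to accept or reject statements.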

Sources

  • Conversations with Andrea: Visitors' Opinions on Android Robots in a Museum
  • Explanations are a means to an end
  • A User Experience 3.0 (UX 3.0) Paradigm Framework: Designing for Human-Centered AI Experiences
  • Towards a Signal Detection Based Measure for Assessing Information Quality of Explainable Recommender Systems
  • Effective Explanations for Belief-Desire-Intention Robots: When and What to Explain
  • Human-Centered Explainability in Interactive Information Systems: A Survey
  • Measurement as Bricolage: Examining How Data Scientists Construct Target Variables for Predictive Modeling Tasks
