Advancements in Explainable AI and Human-Centric System Design

The field of artificial intelligence is moving towards a more human-centric approach, with a focus on explainability, transparency, and user trust. Recent research emphasizes designing systems that elicit appropriate emotional reactions from users while remaining fair, reliable, and respectful of user autonomy. Large language models (LLMs) are being applied to make normative requirements elicitation and consistency analysis more efficient and explainable, and there is growing interest in tools and frameworks that automatically derive explainability requirements and software explanations from user reviews. Noteworthy papers in this area include:

  • Model Cards Revisited, which proposes a revised model card framework that holistically addresses ethical AI requirements.
  • Hierarchical Interaction Summarization and Contrastive Prompting for Explainable Recommendations, which introduces a novel approach for generating high-quality explanations for recommendations (see the sketch after this list).
  • ROS Help Desk, which provides intuitive error explanations and debugging support for robotics systems.
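
To make the contrastive-prompting idea concrete, here is a minimal illustrative sketch, not the paper's actual pipeline: the prompt pairs items the user engaged with against items they skipped and asks an LLM to justify the recommendation by contrasting the two. The `build_contrastive_prompt` helper, the prompt wording, and the placeholder `call_llm` function are assumptions for illustration only.

```python
from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client; swap in a real LLM call here."""
    return "<explanation produced by the LLM>"


def build_contrastive_prompt(liked: List[str], skipped: List[str], recommended: str) -> str:
    """Summarize positive and negative interactions so the model must contrast
    them when explaining why the recommended item fits the user."""
    return (
        "The user enjoyed: " + ", ".join(liked) + ".\n"
        "The user skipped: " + ", ".join(skipped) + ".\n"
        f"In two sentences, explain why '{recommended}' suits this user, "
        "explicitly contrasting it with the skipped items."
    )


if __name__ == "__main__":
    prompt = build_contrastive_prompt(
        liked=["The Martian", "Project Hail Mary"],
        skipped=["Pride and Prejudice"],
        recommended="Artemis",
    )
    print(call_llm(prompt))  # short, contrastive natural-language explanation
```

In practice, hierarchical interaction summarization would first condense a long interaction history into the `liked`/`skipped` summaries before prompting, keeping the prompt within the model's context window.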

Sources

Tool for Supporting Debugging and Understanding of Normative Requirements Using LLMs

Complexity Results of Persuasion

Model Cards Revisited: Bridging the Gap Between Theory and Practice for Ethical AI Requirements

Hierarchical Interaction Summarization and Contrastive Prompting for Explainable Recommendations

The Emotional Alignment Design Policy

Automatic Generation of Explainability Requirements and Software Explanations From User Reviews

ROS Help Desk: GenAI Powered, User-Centric Framework for ROS Error Diagnosis and Debugging

Plausible Counterfactual Explanations of Recommendations
