The field of artificial intelligence is moving towards a more human-centric approach that emphasizes explainability, transparency, and user trust. Recent research stresses designing systems that elicit appropriate emotional reactions from users while remaining fair, reliable, and respectful of user autonomy. Large language models (LLMs) are being applied to make normative requirements elicitation and consistency analysis more efficient and explainable. There is also growing interest in tools and frameworks that automatically derive explainability requirements and software explanations from user reviews (a minimal sketch of such a pipeline follows the paper list below). Noteworthy papers in this area include:
- Model Cards Revisited, which proposes a revised model card framework that holistically addresses ethical AI requirements.
- Hierarchical Interaction Summarization and Contrastive Prompting for Explainable Recommendations, which introduces a novel approach for generating high-quality explanations for recommendations.
- ROS Help Desk, which provides intuitive error explanations and debugging support for robotics systems.
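
To make concrete the idea, mentioned above, of automatically deriving explainability requirements from user reviews, the following is a minimal sketch of what one LLM-based extraction step could look like. It does not reproduce the method of any paper listed here: the `complete`/`fake_llm` functions, the prompt wording, and the JSON output schema are all illustrative assumptions.

```python
import json
from typing import Callable, List

# Placeholder type for any LLM completion call (hosted API or local model):
# it takes a prompt string and returns the model's text response.
LLMComplete = Callable[[str], str]

# Illustrative prompt; the wording and output schema are assumptions, not taken
# from the papers summarized above.
PROMPT_TEMPLATE = """You are a requirements analyst.
From the app review below, extract explainability requirements:
statements describing what the system should explain to its users.
Return a JSON list of objects with "requirement" and "evidence" fields.
If the review contains none, return [].

Review:
\"\"\"{review}\"\"\"
"""

def extract_explainability_requirements(
    reviews: List[str], complete: LLMComplete
) -> List[dict]:
    """Run each user review through the LLM and collect parsed requirements."""
    requirements = []
    for review in reviews:
        raw = complete(PROMPT_TEMPLATE.format(review=review))
        try:
            items = json.loads(raw)
        except json.JSONDecodeError:
            continue  # skip malformed model output rather than failing the batch
        for item in items:
            requirements.append({"source_review": review, **item})
    return requirements

if __name__ == "__main__":
    # Trivial stand-in model for demonstration; a real pipeline would call an LLM here.
    def fake_llm(prompt: str) -> str:
        return json.dumps([{
            "requirement": "The app should explain why a recommendation was made.",
            "evidence": "I have no idea why it keeps suggesting this.",
        }])

    sample_reviews = ["I have no idea why it keeps suggesting this playlist."]
    print(extract_explainability_requirements(sample_reviews, fake_llm))
```

In a realistic pipeline, `fake_llm` would be replaced by an actual model call, and the extracted items would typically be deduplicated and clustered before a requirements analyst reviews them.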