The field of Explainable Artificial Intelligence (XAI) is moving towards a more human-centered approach, focusing on designing accessible, transparent, and trustworthy AI experiences. Researchers are working on developing frameworks and methods that bridge the gap between technical explainability and user-centered design, enabling designers to create AI interactions that foster better understanding, trust, and responsible AI adoption. Notable papers in this area include:
- A study on the role of explanation styles and perceived accuracy in Predictive Process Monitoring, which found that both factors significantly affect users' decision-making.
- The introduction of CopilotLens, a novel interactive framework that makes the reasoning of AI coding agents transparent and explainable, offering a concrete foundation for future agentic code assistants that prioritize clarity of reasoning over speed of suggestion.