Explainable AI and Human-Centered Design

The field of artificial intelligence is moving towards more transparent and explainable systems, with a growing emphasis on human-centered design. Recent work highlights the importance of tailoring explanations to users' needs, particularly in applications such as loan decision systems and conversational AI. Researchers are also exploring multimodal interfaces and adaptive assistance frameworks to improve user trust and autonomy.

Noteworthy papers in this area include:

Assist-as-needed Control for FES in Foot Drop Management proposes a closed-loop FES controller that dynamically adjusts stimulation intensity based on real-time toe clearance (a minimal control-loop sketch appears after this list).

Onto-Epistemological Analysis of AI Explanations investigates the ontological and epistemological assumptions embedded in explainability methods and highlights the risks of ignoring these assumptions when choosing an XAI method.

Explainable but Vulnerable: Adversarial Attacks on XAI Explanation in Cybersecurity Applications explores adversarial techniques for altering explanations and their effects on model decisions (see the attack sketch after this list).

Kantian-Utilitarian XAI: Meta-Explained presents a gamified explainable AI system for ethically aware consumer decision-making.

Trust in Transparency: How Explainable AI Shapes User Perceptions explores the integration of contextual explanations into AI-powered loan decision systems to enhance trust and usability.

Evaluating Node-tree Interfaces for AI Explainability compares user experiences with node-tree and chatbot interfaces in exploratory and decision-making tasks.

Assist-As-Needed: Adaptive Multimodal Robotic Assistance for Medication Management in Dementia Care presents an adaptive multimodal robotic framework that adjusts assistance based on real-time assessment of user needs.

"It feels like hard work trying to talk to it": Understanding Older Adults' Experiences of Encountering and Repairing Conversational Breakdowns with AI Systems investigates how older adults navigate and repair conversational breakdowns with voice-based AI systems.

"Sometimes You Need Facts, and Sometimes a Hug": Understanding Older Adults' Preferences for Explanations in LLM-Based Conversational AI Systems explores older adults' preferences for explanations in conversational AI systems.

The Feature Understandability Scale for Human-Centred Explainable AI: Assessing Tabular Feature Importance introduces psychometrically validated scales for assessing users' understanding of tabular input features in supervised classification (an internal-consistency sketch follows the list).
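The assist-as-needed idea behind the FES controller can be illustrated with a minimal proportional control loop: stimulation rises only when measured toe clearance falls below a target, and decays when the user clears unassisted. The sketch below is a hypothetical illustration, not the paper's controller; the class name, the 2 cm clearance target, the gain, and the 40 mA ceiling are assumed values chosen for readability.

```python
# Hypothetical assist-as-needed FES control loop (illustrative only).
from dataclasses import dataclass


@dataclass
class AssistAsNeededFES:
    """Proportional controller: stimulation intensity tracks the
    clearance deficit, so assistance is given only when needed."""
    target_clearance_cm: float = 2.0   # assumed safe minimum toe clearance
    gain: float = 0.5                  # assumed gain (mA per cm of deficit)
    max_intensity_ma: float = 40.0     # assumed stimulation safety ceiling
    intensity_ma: float = 0.0

    def update(self, toe_clearance_cm: float) -> float:
        """One control step: adjust intensity from the clearance deficit."""
        deficit = self.target_clearance_cm - toe_clearance_cm
        if deficit > 0:
            # Under-clearance: raise stimulation in proportion to the deficit.
            self.intensity_ma = min(self.intensity_ma + self.gain * deficit,
                                    self.max_intensity_ma)
        else:
            # Adequate clearance: decay stimulation so the user stays active.
            self.intensity_ma = max(self.intensity_ma + self.gain * deficit * 0.5,
                                    0.0)
        return self.intensity_ma


controller = AssistAsNeededFES()
for clearance in [2.5, 1.2, 0.8, 1.9, 2.4]:  # simulated toe clearance (cm)
    print(f"clearance={clearance:.1f} cm -> "
          f"stimulation={controller.update(clearance):.1f} mA")
```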
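To make the explanation-attack threat concrete, here is a minimal sketch of a well-known observation for linear models, not the paper's technique: any perturbation orthogonal to the weight vector leaves a logistic model's prediction exactly unchanged while still reordering gradient-by-input attributions. The dataset, model, and targeted feature are illustrative assumptions.

```python
# Illustrative explanation attack on a linear model (not the paper's method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
w = model.coef_[0]

x = X[0].copy()
attrib = w * x                      # gradient-by-input attribution
top = int(np.argmax(np.abs(attrib)))

# Build a perturbation that suppresses the top-attributed feature, then
# project it onto the orthogonal complement of w so the logit w @ x + b
# (and hence the predicted probability) is untouched.
delta = np.zeros_like(x)
delta[top] = -x[top]                # aim to zero the top attribution...
delta -= (delta @ w) / (w @ w) * w  # ...while keeping w @ x constant

x_adv = x + delta
print("prediction unchanged:",
      np.allclose(model.predict_proba([x]), model.predict_proba([x_adv])))
print("top attributed feature before:", top,
      "after:", int(np.argmax(np.abs(w * x_adv))))
```

Because the logit depends on the input only through the inner product with the weights, the projection step guarantees identical predicted probabilities; only the per-feature attribution pattern moves, which is exactly the failure mode such attacks exploit.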
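Psychometric validation of a scale like the Feature Understandability Scale typically includes internal-consistency checks. The paper's exact procedure is not reproduced here; the sketch below computes Cronbach's alpha, a standard such statistic, over a simulated matrix of Likert responses. All names and data are assumptions for illustration.

```python
# Cronbach's alpha: a standard internal-consistency statistic used when
# psychometrically validating rating scales. Simulated data for illustration.
import numpy as np


def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Alpha for an (n_respondents, n_items) matrix of Likert ratings."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)      # per-item variance
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of sum scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)


rng = np.random.default_rng(42)
# 50 simulated respondents rating 6 items on a 1-5 scale, driven by a shared
# latent "understanding" trait so items correlate, as a coherent scale should.
trait = rng.normal(3.0, 1.0, size=(50, 1))
ratings = np.clip(np.rint(trait + rng.normal(0.0, 0.7, size=(50, 6))), 1, 5)
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```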

Sources

Assist-as-needed Control for FES in Foot Drop Management

Onto-Epistemological Analysis of AI Explanations

Explainable but Vulnerable: Adversarial Attacks on XAI Explanation in Cybersecurity Applications

Kantian-Utilitarian XAI: Meta-Explained

Trust in Transparency: How Explainable AI Shapes User Perceptions

Evaluating Node-tree Interfaces for AI Explainability

Assist-As-Needed: Adaptive Multimodal Robotic Assistance for Medication Management in Dementia Care

"It feels like hard work trying to talk to it": Understanding Older Adults' Experiences of Encountering and Repairing Conversational Breakdowns with AI Systems

"Sometimes You Need Facts, and Sometimes a Hug": Understanding Older Adults' Preferences for Explanations in LLM-Based Conversational AI Systems

The Feature Understandability Scale for Human-Centred Explainable AI: Assessing Tabular Feature Importance
