Transparent and Explainable AI Systems

The field of artificial intelligence is moving toward transparent and explainable systems, a shift driven by the need for accountability, trust, and regulatory compliance in high-stakes decision-making. Recent research has focused on integrating symbolic reasoning with sub-symbolic learning, yielding neuro-symbolic approaches whose decisions can be inspected and explained to users. There is also growing emphasis on explainability across applications such as recommender systems, phishing detection, and medical diagnostics.

Noteworthy papers in this area include 'Explain, Don't Just Warn!', which presents a real-time framework for generating phishing warnings with contextual cues, and 'Explainability Through Human-Centric Design for XAI in Lung Cancer Detection', which introduces a human-centric, expert-guided concept bottleneck model for interpretable lung cancer diagnosis. Together these papers underscore the importance of explainability and transparency in AI systems, and highlight the need for continued research to ensure such systems are trustworthy and effective.
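To make the concept bottleneck idea concrete: such a model first predicts a small set of human-interpretable concepts, and the final decision is computed only from those concepts, so every prediction can be explained in concept terms. The sketch below is a minimal illustration under assumed, hypothetical concept names and thresholds; it is not the architecture or the clinical criteria from the cited paper.

```python
# Minimal sketch of a concept bottleneck model (CBM).
# Concept names, thresholds, and the decision rule are illustrative
# assumptions, not the method from the lung cancer detection paper.

def predict_concepts(features):
    # Stage 1: map raw inputs to human-interpretable concepts.
    # Simple thresholds stand in for a learned concept network.
    return {
        "nodule_size_large": features["nodule_mm"] > 8.0,
        "irregular_margin": features["margin_score"] > 0.5,
        "high_growth_rate": features["growth_rate"] > 0.2,
    }

def predict_label(concepts):
    # Stage 2: the final decision uses ONLY the concepts, so each
    # prediction is explainable as "which concepts fired".
    risk = sum(concepts.values())
    return "suspicious" if risk >= 2 else "likely benign"

case = {"nodule_mm": 11.0, "margin_score": 0.7, "growth_rate": 0.1}
concepts = predict_concepts(case)
print(concepts)                  # interpretable intermediate layer
print(predict_label(concepts))   # decision grounded in those concepts
```

The bottleneck is what makes the model interpretable: a domain expert can audit or even override the concept layer before the final prediction is made.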

Sources

Differentiable Fuzzy Neural Networks for Recommender Systems

"Explain, Don't Just Warn!" -- A Real-Time Framework for Generating Phishing Warnings with Contextual Cues

Interpretable Event Diagnosis in Water Distribution Networks

DocVXQA: Context-Aware Visual Explanations for Document Question Answering

Data Ethics in the Fediverse: Analyzing the Role of Instance Policies in Mastodon Research

Evaluating Explanation Quality in X-IDS Using Feature Alignment Metrics

Visually Interpretable Subtask Reasoning for Visual Question Answering

TikTok Search Recommendations: Governance and Research Challenges

Item Level Exploration Traffic Allocation in Large-scale Recommendation Systems

Display Content, Display Methods and Evaluation Methods of the HCI in Explainable Recommender Systems: A Survey

Diffusion Recommender Models and the Illusion of Progress: A Concerning Study of Reproducibility and a Conceptual Mismatch

Explainability Through Human-Centric Design for XAI in Lung Cancer Detection

Post-Post-API Age: Studying Digital Platforms in Scant Data Access Times

A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support

Force-Driven Validation for Collaborative Robotics in Automated Avionics Testing
