Transparent and Explainable AI Systems

The field of artificial intelligence is moving toward transparent and explainable systems, a shift driven by the need for accountability, trust, and regulatory compliance in complex decision-making. Recent research focuses on integrating symbolic reasoning with sub-symbolic learning, yielding neuro-symbolic approaches that support transparent, user-centric systems. Explainability is also receiving growing attention across applications such as recommender systems, phishing detection, and medical diagnostics. Noteworthy papers include "Explain, Don't Just Warn!", which presents a real-time framework for generating phishing warnings with contextual cues, and "Explainability Through Human-Centric Design for XAI in Lung Cancer Detection", which introduces a human-centric, expert-guided concept bottleneck model for interpretable lung cancer diagnosis. These papers demonstrate the importance of explainability and transparency in AI systems, and highlight the need for continued research to ensure such systems are trustworthy and effective.
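The contextual-cue idea behind explainable phishing warnings can be illustrated with a short sketch: rather than showing a generic alert, the warning names the specific signals that make a link suspicious. The cue heuristics below are simplified, hypothetical illustrations, not the detection logic of "Explain, Don't Just Warn!".

```python
# Hedged sketch: compose a phishing warning that cites concrete contextual
# cues instead of a generic "this site may be dangerous" banner.
# All cue checks here are illustrative assumptions, not the cited paper's method.
from urllib.parse import urlparse

def contextual_cues(url: str, link_text: str) -> list[str]:
    """Return human-readable reasons why a link looks suspicious."""
    cues = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # Cue 1: displayed link text does not match the real destination.
    if link_text and link_text.lower() not in url.lower():
        cues.append(f'the link text "{link_text}" does not match the real destination {host}')
    # Cue 2: unusually deep subdomain chains often hide the true domain.
    if host.count(".") >= 3:
        cues.append(f"the address {host} uses an unusually deep subdomain")
    # Cue 3: no transport encryption.
    if parsed.scheme != "https":
        cues.append("the connection is not encrypted (no HTTPS)")
    return cues

def build_warning(url: str, link_text: str) -> str | None:
    cues = contextual_cues(url, link_text)
    if not cues:
        return None
    return "Warning: this link looks like phishing because " + "; ".join(cues) + "."

print(build_warning("http://login.paypal.example.attacker.com/verify", "paypal.com"))
```

The point of the design is the explanation itself: each cue maps one detection signal to one sentence the user can verify, which is what distinguishes an explainable warning from a bare block page.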
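Similarly, the concept bottleneck architecture mentioned above makes its final prediction only from a small set of human-interpretable concepts, so a clinician can inspect (and in principle correct) the intermediate concept values. The following is a minimal PyTorch sketch with a joint training loss; the layer sizes, concept count, and example concept names are assumptions for illustration, not values from the cited paper.

```python
# Minimal sketch of a concept bottleneck model (CBM): the network first
# predicts human-defined concepts, then classifies *only* from those concepts.
# Dimensions and targets below are dummy values chosen for illustration.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # g: input features -> concept logits (e.g. "nodule spiculation").
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )
        # f: concepts -> class logits. Because the final decision depends on
        # the concepts alone, the model's reasoning is inspectable.
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_net(x)
        concepts = torch.sigmoid(concept_logits)  # per-concept probabilities
        return self.label_net(concepts), concept_logits

# Joint training: supervise both the final label and the concepts.
model = ConceptBottleneckModel(in_dim=64, n_concepts=8, n_classes=2)
x = torch.randn(16, 64)                                # dummy batch
concept_targets = torch.randint(0, 2, (16, 8)).float() # expert concept labels
labels = torch.randint(0, 2, (16,))                    # diagnosis labels

class_logits, concept_logits = model(x)
loss = (nn.functional.cross_entropy(class_logits, labels)
        + nn.functional.binary_cross_entropy_with_logits(concept_logits,
                                                         concept_targets))
loss.backward()
```

At inference time, reading out the sigmoid-activated concepts alongside the diagnosis is what provides the interpretable intermediate layer; an expert-guided variant additionally constrains which concepts the model may use.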
Sources
"Explain, Don't Just Warn!" -- A Real-Time Framework for Generating Phishing Warnings with Contextual Cues
Display Content, Display Methods and Evaluation Methods of the HCI in Explainable Recommender Systems: A Survey