Explainability and Transparency in AI Systems

The field of AI is moving toward a greater emphasis on explainability and transparency, with a focus on developing techniques and frameworks that provide insight into the decision-making processes of AI systems. This shift is driven by the need to build trust in AI systems and to ensure that they are fair, accountable, and reliable. Recent research has explored a range of approaches, including neuro-symbolic frameworks, large language models, and multimodal attention-based models, all aimed at producing user-centered explanations tailored to the needs of diverse audiences. Notable papers in this area include MetaExplainer, which generates multi-type, user-centered explanations for AI systems, and FIRE, which provides faithful and interpretable recommendation explanations.
Sources
Explainable AI and Machine Learning for Exam-based Student Evaluation: Causal and Predictive Analysis of Socio-academic and Economic Factors
Context-Aware Visualization for Explainable AI Recommendations in Social Media: A Vision for User-Aligned Explanations
Screen Matters: Cognitive and Behavioral Divergence Between Smartphone-Native and Computer-Native Youth
Personalized Knowledge Transfer Through Generative AI: Contextualizing Learning to Individual Career Goals