Explainability and Transparency in AI Systems

The field of AI is placing greater emphasis on explainability and transparency, focusing on techniques and frameworks that provide insight into the decision-making processes of AI systems. This shift is driven by the need to build trust in AI and to ensure that systems are fair, accountable, and reliable. Recent research has explored several approaches to explainability, including neuro-symbolic frameworks, large language models, and multimodal attention-based models, all aiming to deliver user-centered explanations tailored to diverse audiences. Notable papers in this area include MetaExplainer, which generates multi-type user-centered explanations for AI systems, and FIRE, which produces faithful and interpretable recommendation explanations.
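As a toy illustration of the attention-based line of work (not taken from any of the cited papers), attention weights over input features are often read as post-hoc saliency scores: higher weight is interpreted as greater influence on the model's decision. A minimal sketch, with hypothetical feature names and attention logits:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_explanation(logits, features):
    """Rank input features by their normalized attention weight,
    a common (and debated) explanation heuristic for attention models."""
    weights = softmax(logits)
    return sorted(zip(features, weights), key=lambda fw: fw[1], reverse=True)

# Hypothetical example: which features drove a recommendation?
features = ["price", "brand", "reviews", "color"]
logits = [2.0, 0.5, 1.5, 0.1]  # assumed attention scores, for illustration only

for feature, weight in attention_explanation(logits, features):
    print(f"{feature}: {weight:.2f}")
```

Note that whether raw attention weights constitute a *faithful* explanation is itself contested in the literature, which is part of what motivates work like FIRE on faithfulness.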

Sources

MetaExplainer: A Framework to Generate Multi-Type User-Centered Explanations for AI Systems

Explainable AI and Machine Learning for Exam-based Student Evaluation: Causal and Predictive Analysis of Socio-academic and Economic Factors

Demo: TOSense -- What Did You Just Agree to?

Context-Aware Visualization for Explainable AI Recommendations in Social Media: A Vision for User-Aligned Explanations

Transparent Adaptive Learning via Data-Centric Multimodal Explainable AI

Screen Matters: Cognitive and Behavioral Divergence Between Smartphone-Native and Computer-Native Youth

From App Features to Explanation Needs: Analyzing Correlations and Predictive Potential

Are Today's LLMs Ready to Explain Well-Being Concepts?

Personalized Knowledge Transfer Through Generative AI: Contextualizing Learning to Individual Career Goals

Decoding the Multimodal Maze: A Systematic Review on the Adoption of Explainability in Multimodal Attention-based Models

AI Should Be More Human, Not More Complex

FIRE: Faithful Interpretable Recommendation Explanations

An Explainable Natural Language Framework for Identifying and Notifying Target Audiences In Enterprise Communication
