Advances in Explainable AI and Transparency

The field of artificial intelligence is placing growing emphasis on explainability and transparency, with techniques that provide insight into the decision-making processes of AI models. Recent research highlights the importance of understanding how AI models use language and reach decisions, and introduces new methods for attributing model answers to specific regions of visual data. There is also growing recognition of the need for more interpretable and trustworthy AI systems, including new cognitive architectures intended to shape language and support more deliberate, reflective interaction with AI explanations.

Noteworthy papers in this area include RADAR, which introduces a semi-automatic approach for constructing a benchmark dataset to evaluate and enhance the ability of multimodal large language models to attribute their reasoning process to visual evidence; CausalSent, which develops a two-headed RieszNet-based neural network architecture for interpretable sentiment classification grounded in causal inference (a minimal sketch of this pattern appears below); and Model Science, which proposes a conceptual framework for a new discipline that places the trained model at the core of analysis, aiming to interact with, verify, explain, and control its behavior across diverse operational contexts.
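
The CausalSent summary points to the general RieszNet pattern of jointly learning an outcome regression and a Riesz representer from a shared representation. The sketch below is a minimal, hypothetical PyTorch illustration of such a two-headed architecture; the module names, layer sizes, treatment encoding, and loss weighting are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a two-headed RieszNet-style architecture (hypothetical;
# layer sizes, names, and loss weighting are illustrative assumptions, not
# CausalSent's actual implementation).
import torch
import torch.nn as nn


class TwoHeadedRieszNet(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Shared encoder over input features (e.g. text embeddings).
        self.shared = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # Head 1: outcome regression g(x, t) -> predicted sentiment score.
        self.outcome_head = nn.Linear(hidden_dim + 1, 1)
        # Head 2: Riesz representer alpha(x, t), used for debiased estimation
        # of the causal effect of a binary treatment (e.g. a target phrase
        # being present in the text).
        self.riesz_head = nn.Linear(hidden_dim + 1, 1)

    def forward(self, x: torch.Tensor, t: torch.Tensor):
        h = self.shared(x)
        ht = torch.cat([h, t.unsqueeze(-1)], dim=-1)
        return self.outcome_head(ht).squeeze(-1), self.riesz_head(ht).squeeze(-1)


def joint_loss(model, x, t, y, lambda_riesz: float = 1.0):
    # Hypothetical joint objective: outcome MSE plus the Riesz representer
    # loss for an average-treatment-effect-style functional,
    # E[alpha(x, t)^2] - 2 * E[alpha(x, 1) - alpha(x, 0)].
    y_hat, alpha = model(x, t)
    outcome_loss = torch.mean((y - y_hat) ** 2)
    _, alpha1 = model(x, torch.ones_like(t))
    _, alpha0 = model(x, torch.zeros_like(t))
    riesz_loss = torch.mean(alpha ** 2) - 2 * torch.mean(alpha1 - alpha0)
    return outcome_loss + lambda_riesz * riesz_loss
```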

Sources

Humans Perceive Wrong Narratives from AI Reasoning Texts

To Explain Or Not To Explain: An Empirical Investigation Of AI-Based Recommendations On Social Media Platforms

RADAR: A Reasoning-Guided Attribution Framework for Explainable Visual Data Analysis

DashboardQA: Benchmarking Multimodal Agents for Question Answering on Interactive Dashboards

CausalSent: Interpretable Sentiment Classification with RieszNet

Semantic Attractors and the Emergence of Meaning: Towards a Teleological Model of AGI

Enhancing XAI Interpretation through a Reverse Mapping from Insights to Visualizations

Interpretable by AI Mother Tongue: Native Symbolic Reasoning in Neural Models

Burst: Collaborative Curation in Connected Social Media Communities

Model Science: getting serious about verification, explanation and control of AI systems

AI reasoning effort mirrors human decision time on content moderation tasks
