Explainability and Trust in AI Systems

The field of artificial intelligence is moving toward a more nuanced understanding of explainability and trust, recognizing the diverse needs and expectations of different stakeholders. Researchers are developing frameworks that align explanations with stakeholders' epistemic, contextual, and ethical expectations, and are exploring the role of large language models in enhancing social explainability. The concept of trust is also being reexamined, with a focus on the complex relationship between trustor and trustee and on the need for more comprehensive models that capture these dynamics. Additionally, there is growing interest in applying explainable AI methods to real-world problems such as biodiversity monitoring and conservation.

Noteworthy papers in this area include 'Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder with LLMs', which proposes a multilevel framework for aligning explanations with stakeholder expectations; 'Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs', which introduces a bowtie model for conceptualizing and formulating trust in LLMs; and 'From Images to Insights: Explainable Biodiversity Monitoring with Plain Language Habitat Explanations', which proposes an end-to-end visual-to-causal framework for explaining species habitat preferences.

Sources

Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder with LLMs

The Lock-in Hypothesis: Stagnation by Algorithm

Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs

From Images to Insights: Explainable Biodiversity Monitoring with Plain Language Habitat Explanations
