Explainable AI for High-Stakes Decision-Making

The field of Artificial Intelligence is moving toward more transparent and trustworthy systems, particularly in high-stakes decision-making environments such as healthcare and finance. Explainable AI (XAI) techniques are being explored to provide insight into how models reach their decisions, improving reliability and helping users verify AI-generated assessments. Researchers are also working to integrate legal considerations into XAI systems so that explanations are actionable and contestable, and there is growing interest in applying XAI beyond its traditional domains, including food engineering and nuclear energy. Noteworthy papers include:

Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being, which introduces an evaluation framework for developing explainable AI systems in healthcare.

Legally-Informed Explainable AI, which makes the case for integrating legal considerations into XAI systems so that their explanations support recourse and contestation.

eXplainable AI for data driven control, which proposes an XAI methodology based on Inverse Optimal Control to obtain local explanations for the behavior of a controller. A minimal sketch of what such a local explanation looks like in practice follows below.
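To make the idea of a local explanation concrete, the sketch below applies a generic perturbation-based attribution to a single prediction: each feature is replaced with values drawn from background data, and the resulting shift in the predicted probability indicates that feature's local influence. This is an illustrative technique, not the method of any paper above; the dataset, model, and the `local_attribution` helper are assumptions chosen to keep the example self-contained.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical high-stakes setting: a clinical risk classifier.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_attribution(model, x, background, n_samples=200, rng=None):
    """Perturbation-based local attribution (illustrative, not from the
    cited papers): for each feature, replace its value with draws from
    the background data and measure the average shift in the predicted
    probability of class 1."""
    rng = rng or np.random.default_rng(0)
    base = model.predict_proba(x[None, :])[0, 1]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] = rng.choice(background[:, j], size=n_samples)
        # Positive score: the feature's actual value pushes the
        # prediction toward class 1 relative to background values.
        scores[j] = base - model.predict_proba(perturbed)[:, 1].mean()
    return scores

scores = local_attribution(model, X[0], X)
top = np.argsort(np.abs(scores))[::-1][:3]
print("Most influential features for this case:", top, scores[top])
```

A clinician reviewing the flagged case could then check whether the top-ranked features match domain knowledge, which is one way such explanations support verification of AI-generated assessments.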

Sources

Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being

Explainable Artificial Intelligence techniques for interpretation of food datasets: a review

Legally-Informed Explainable AI

eXplainable AI for data driven control: an inverse optimal control approach

Towards an AI Observatory for the Nuclear Sector: A tool for anticipatory governance

Enhancing Explainability and Reliable Decision-Making in Particle Swarm Optimization through Communication Topologies

Questions: A Taxonomy for Critical Reflection in Machine-Supported Decision-Making

In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?
