Advances in Explainable AI and Fairness

The field of Artificial Intelligence is moving towards increased transparency and accountability, with a growing focus on explainability and fairness. Recent studies have explored the use of explainability methods to detect and interpret unfairness, proposing pipelines that derive fairness-related insights from model explanations. The intersection of explainability and fairness has emerged as a crucial area for promoting responsible AI systems. Researchers are also investigating the impact of biased databases on prediction algorithms and the influence of anonymisation on prediction quality. Furthermore, innovative methods are being developed to provide global explanations for outlier detection and to advance the state of the art in explainable machine learning pipelines. Noteworthy papers include:

  • Explanations as Bias Detectors, which proposes a pipeline to leverage explainability methods for fairness exploration.
  • Robust ML Auditing using Prior Knowledge, which introduces a novel approach to manipulation-proof auditing using prior knowledge about the ground truth.
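To make the bias-detection idea concrete, here is a minimal, hedged sketch (not the actual pipeline from any of the papers above) of using a global explainability method, permutation importance, to flag whether a hypothetical protected attribute drives a classifier's predictions. All data, feature names, and thresholds are illustrative assumptions.

```python
# Hedged sketch: flag potential bias by checking whether a sensitive
# attribute carries high explanatory weight in a trained classifier.
# The data and the "sensitive"/"skill" features are entirely synthetic.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, n)      # hypothetical protected attribute
skill = rng.normal(0.0, 1.0, n)        # legitimate predictive feature
# Deliberately biased labels: the outcome partly depends on `sensitive`.
y = (skill + 1.5 * sensitive + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([skill, sensitive])
clf = LogisticRegression().fit(X, y)

# Permutation importance: drop in accuracy when each feature is shuffled.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
importances = dict(zip(["skill", "sensitive"], result.importances_mean))

# A large importance for the sensitive attribute signals potential bias
# worth investigating with a proper fairness audit.
print(importances)
```

In a real audit, this kind of global signal would be complemented by local post-hoc explanations (e.g. per-instance attributions) and formal fairness metrics; a single importance score is only a starting point for exploration.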

Sources

Explanations as Bias Detectors: A Critical Study of Local Post-hoc XAI Methods for Fairness Exploration

Explainable AI for Correct Root Cause Analysis of Product Quality in Injection Moulding

Study of the influence of a biased database on the prediction of standard algorithms for selecting the best candidate for an interview

Algorithmic Accountability in Small Data: Sample-Size-Induced Bias Within Classification Metrics

Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest

From Incidents to Insights: Patterns of Responsibility following AI Harms

Robust ML Auditing using Prior Knowledge
