The field of Artificial Intelligence is moving towards greater transparency and accountability, with growing emphasis on explainability and fairness. Recent studies have used explainability methods to detect and interpret unfairness, proposing pipelines that derive fairness-related insights from model explanations. The intersection of explainability and fairness has emerged as a crucial area for building responsible AI systems. Researchers are also investigating how biased databases affect prediction algorithms and how anonymisation influences prediction quality. In addition, new methods are being developed to provide global explanations for outlier detection and to advance the state of the art in explainable machine learning pipelines. Noteworthy papers include:
- Explanations as Bias Detectors, which proposes a pipeline to leverage explainability methods for fairness exploration.
- Robust ML Auditing using Prior Knowledge, which introduces a novel approach to manipulation-proof auditing using prior knowledge about the ground truth.
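The core idea behind using explanations as bias detectors can be illustrated with a minimal sketch: train a model on data whose labels leak a protected attribute, then use a simple explainability method (permutation importance) to reveal that the model relies heavily on that attribute. This is not the pipeline from the cited paper; the synthetic data, the hand-rolled logistic regression, and the feature names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical protected attribute (e.g. a binary group membership)
sensitive = rng.integers(0, 2, n).astype(float)
noise = rng.normal(size=n)
# Biased labels: they agree with the protected attribute 90% of the time
y = np.where(rng.random(n) < 0.9, sensitive, 1.0 - sensitive)

X = np.column_stack([sensitive, noise])

def fit_logreg(X, y, lr=0.5, steps=500):
    """Plain logistic regression via gradient descent (no external deps)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = fit_logreg(X, y)

def accuracy(Xm):
    return np.mean(((Xm @ w + b) > 0) == y)

# Permutation importance: accuracy drop when a feature is shuffled.
# A large drop for the protected attribute flags it as a bias driver.
base = accuracy(X)
importances = {}
for j, name in enumerate(["sensitive", "noise"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances[name] = base - accuracy(Xp)

print(importances)
```

Here the permutation importance of the protected attribute dominates that of the uninformative feature, which is the kind of signal a fairness-exploration pipeline would surface for human review.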