Fairness and Transparency in Decision-Making Systems

The field of decision-making systems is moving toward incorporating fairness and transparency into its models. Recent research has focused on frameworks that provide actionable recommendations while ensuring the resulting decisions are fair and unbiased. This includes the use of reinforcement learning to generate durable and valid recommendations, as well as fairness-aware methods that make the trade-off between performance and fairness explicit and transparent. Another focus is explaining proxy discrimination and unfairness in individual decisions made by AI systems, with the goal of identifying and mitigating structural biases.

Noteworthy papers in this area include:

Reinforcement Learning for Durable Algorithmic Recourse, which presents a time-aware framework for algorithmic recourse.

FairViT-GAN, a hybrid framework that integrates a CNN branch for local feature extraction with a ViT branch for global context modeling, and introduces an adversarial debiasing mechanism to mitigate algorithmic bias (a minimal sketch of this mechanism follows the list).

GESA, a comprehensive framework that addresses the limitations of current state-of-the-art approaches to candidate-role matching through the integration of domain-adaptive transformer embeddings, heterogeneous self-supervised graph neural networks, and adversarial debiasing mechanisms.
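The adversarial debiasing mechanism referenced above is commonly implemented as a bias discriminator trained against the feature encoder. Below is a minimal, illustrative sketch using a gradient reversal layer; it is not the FairViT-GAN implementation, and all module names, dimensions, and hyperparameters (DebiasedPredictor, feat_dim, lambd) are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips and scales the gradient on the
    backward pass, so the encoder learns features the bias discriminator
    cannot exploit."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasedPredictor(nn.Module):
    """Hypothetical predictor with a task head and an adversarial bias head."""
    def __init__(self, in_dim=64, feat_dim=128, n_groups=2, lambd=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, 1)         # e.g., a beauty score
        self.bias_head = nn.Linear(feat_dim, n_groups)  # predicts protected group
        self.lambd = lambd

    def forward(self, x):
        z = self.encoder(x)
        score = self.task_head(z)
        # The discriminator sees gradient-reversed features, penalizing the
        # encoder whenever group membership is recoverable from z.
        group_logits = self.bias_head(GradReverse.apply(z, self.lambd))
        return score, group_logits

# Dummy training step on random data to show the combined objective.
model = DebiasedPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)             # batch of input features
y = torch.randn(32, 1)              # task targets
g = torch.randint(0, 2, (32,))      # protected-group labels
score, group_logits = model(x)
loss = nn.functional.mse_loss(score, y) \
     + nn.functional.cross_entropy(group_logits, g)
loss.backward()                     # GRL flips the bias gradient at the encoder
opt.step()
```

The gradient reversal trick, borrowed from domain-adversarial training, lets a single backward pass train the discriminator to predict the protected attribute while pushing the encoder in the opposite direction; the weight lambd controls how strongly debiasing competes with task accuracy.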
Sources
Fairness-Aware Reinforcement Learning (FAReL): A Framework for Transparent and Balanced Sequential Decision-Making
FairViT-GAN: A Hybrid Vision Transformer with Adversarial Debiasing for Fair and Explainable Facial Beauty Prediction