Advances in Fairness and Bias Mitigation in Machine Learning

The field of machine learning is placing greater emphasis on fairness and bias mitigation, with a focus on developing methods that balance the competing demands of accuracy and fairness. Recent research has highlighted the challenges of achieving fairness in machine learning, including the potential for zero-sum trade-offs between demographic groups. However, approaches such as proportional optimal transport and adversarial fair multi-view clustering have shown promise in improving fairness without sacrificing overall performance. Notable papers in this area include:

- FairPOT, a post-processing framework that uses proportional optimal transport to balance AUC performance and fairness.
- Adversarial Fair Multi-View Clustering, which integrates fairness learning into the representation learning process so that cluster assignments are unaffected by sensitive attributes.
- Argumentative Debates for Transparent Bias Detection, an interpretable and explainable bias-detection method that relies on debates about the presence of bias against individuals.
- Competing Risks, which demonstrates theoretically why treating competing risks as censoring introduces substantial bias into survival estimates and amplifies disparities.
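To give a feel for the optimal-transport idea behind score post-processing, the sketch below uses the fact that in one dimension the optimal transport map between two empirical distributions is quantile matching, and moves a tunable fraction `lam` of each score along that map. This is an illustration of the general technique only, not FairPOT's algorithm; the group score distributions, the `ot_post_process` helper, and the `lam` parameter are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores for two demographic groups; group B's scores
# are systematically shifted downward relative to group A's.
scores_a = rng.beta(4, 2, size=5000)
scores_b = rng.beta(2, 4, size=5000)

def ot_post_process(src, ref, lam):
    """Move `src` scores a fraction `lam` of the way along the 1-D optimal
    transport map to the `ref` distribution (i.e., quantile matching)."""
    order = np.argsort(src)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(src))
    quantiles = (ranks + 0.5) / len(src)
    mapped = np.quantile(ref, quantiles)  # 1-D OT map = quantile transfer
    return (1 - lam) * src + lam * mapped

# Transport half-way (lam=0.5): the mean gap between groups roughly halves,
# while each adjusted score stays between its original and mapped value.
adjusted_b = ot_post_process(scores_b, scores_a, lam=0.5)
print(f"mean gap before: {scores_a.mean() - scores_b.mean():.3f}")
print(f"mean gap after : {scores_a.mean() - adjusted_b.mean():.3f}")
```

Setting `lam=0` leaves scores untouched and `lam=1` fully aligns the distributions; intermediate values trade off fairness improvement against deviation from the original scores, which is the kind of proportional control the paper's title suggests.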
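The competing-risks point can be made concrete with a small simulation (an illustrative sketch, not the paper's code; the rates and horizon are assumptions). When a competing event precludes the event of interest, treating it as independent censoring and reading incidence off 1 − Kaplan–Meier recovers the marginal distribution of the latent event time and overstates the real-world cumulative incidence, which the Aalen–Johansen-style estimator below gets right.

```python
import math
import random

random.seed(0)
n = 100_000
horizon = 5.0

# Latent exponential times: event of interest (rate 0.2) vs a competing
# event (rate 0.3); only the first of the two is observed.
events = []
for _ in range(n):
    t_int = random.expovariate(0.2)
    t_cmp = random.expovariate(0.3)
    if min(t_int, t_cmp) > horizon:
        events.append(("censor", horizon))
    elif t_int < t_cmp:
        events.append(("interest", t_int))
    else:
        events.append(("compete", t_cmp))

# True cumulative incidence of the event of interest by `horizon`:
# lambda1/(lambda1+lambda2) * (1 - exp(-(lambda1+lambda2)*horizon)).
true_cif = 0.2 / 0.5 * (1 - math.exp(-0.5 * horizon))

events.sort(key=lambda e: e[1])
at_risk = n
surv = 1.0     # overall survival from any event (for Aalen-Johansen)
km_surv = 1.0  # naive KM that censors competing events
cif = 0.0      # Aalen-Johansen cumulative incidence of the event of interest
i = 0
while i < len(events):
    t = events[i][1]
    d_int = d_cmp = d_cen = 0
    while i < len(events) and events[i][1] == t:
        kind = events[i][0]
        d_int += kind == "interest"
        d_cmp += kind == "compete"
        d_cen += kind == "censor"
        i += 1
    cif += surv * d_int / at_risk
    surv *= 1 - (d_int + d_cmp) / at_risk
    km_surv *= 1 - d_int / at_risk
    at_risk -= d_int + d_cmp + d_cen

naive = 1 - km_surv
print(f"true cumulative incidence : {true_cif:.3f}")
print(f"Aalen-Johansen estimate   : {cif:.3f}")
print(f"naive 1 - KM (biased up)  : {naive:.3f}")
```

With these rates the naive estimate converges to 1 − exp(−0.2 · 5) ≈ 0.63, far above the true incidence of about 0.37, illustrating how the bias can be large enough to distort group-level risk comparisons.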

Sources

Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?

FairPOT: Balancing AUC Performance and Fairness with Proportional Optimal Transport

Adversarial Fair Multi-View Clustering

Argumentative Debates for Transparent Bias Detection [Technical Report]

Competing Risks: Impact on Risk Estimation and Algorithmic Fairness
