Advances in Fairness and Bias Mitigation in AI

The field of artificial intelligence is placing greater emphasis on fairness and bias mitigation, with innovative methods being developed to address these issues across a range of applications. Recent research highlights intersectional fairness, fairness in regression tasks, and fairness in face attribute classification, among other areas, with proposed approaches including metric-based fairness measures and adaptive meta-learning-based sample reweighting. Noteworthy papers include Intersectional Divergence, which proposes a fairness measure for regression tasks; Component-Based Fairness in Face Attribute Classification with Bayesian Network-informed Meta Learning, which introduces a fairness notion defined by biological face features; Fair Uncertainty Quantification for Depression Prediction, which proposes a fairness-aware optimization strategy for depression prediction; and Domain Adversarial Training for Mitigating Gender Bias in Speech-based Mental Health Detection, which uses domain adversarial training to reduce gender bias in speech-based mental health detection.
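To make the idea of a metric-based fairness measure for regression concrete, the sketch below computes per-group mean absolute error and the gap between the best- and worst-served groups. This is an illustrative simplification, not the Intersectional Divergence measure from the cited paper; the function name and toy data are assumptions for demonstration only.

```python
import numpy as np

def groupwise_mae_gap(y_true, y_pred, groups):
    """Per-group MAE and the largest gap between any two groups.

    A large gap means the regression model's error is unevenly
    distributed across subpopulations -- a simple group-fairness
    signal (illustrative; not the paper's exact measure).
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    maes = {g: float(np.mean(np.abs(y_true[groups == g] - y_pred[groups == g])))
            for g in np.unique(groups)}
    gap = max(maes.values()) - min(maes.values())
    return maes, gap

# Toy example: group "b" receives systematically larger errors.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 2.1, 4.0, 5.0])
groups = np.array(["a", "a", "b", "b"])
maes, gap = groupwise_mae_gap(y_true, y_pred, groups)
```

A fairness-aware training procedure (e.g. sample reweighting) would then penalize or rebalance against this gap rather than optimizing average error alone.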

Sources

Intersectional Divergence: Measuring Fairness in Regression

Component-Based Fairness in Face Attribute Classification with Bayesian Network-informed Meta Learning

Gone With the Bits: Revealing Racial Bias in Low-Rate Neural Compression for Facial Images

Domain Adversarial Training for Mitigating Gender Bias in Speech-based Mental Health Detection

Fairness Perceptions in Regression-based Predictive Models

Fair Uncertainty Quantification for Depression Prediction
