Advances in Fairness and Transparency in Machine Learning

Machine learning research is placing growing emphasis on fairness and transparency, developing methods that mitigate bias and ensure equitable treatment of individuals. Recent work highlights how temporal distribution shifts in training data affect both model performance and fairness, and proposes training approaches that are robust to such shifts. There is also growing interest in auditing and testing machine learning models for fairness, including concolic testing of individual fairness and regularisation-based approaches to group fairness. Noteworthy papers include Who Pays for Fairness?, which rethinks algorithmic recourse through a fairness framework based on social burden, and Machine Learning with Multitype Protected Attributes, which proposes a distance covariance regularisation framework for achieving intersectional fairness.
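To make the regularisation idea concrete, the sketch below adds an empirical distance covariance penalty between a model's predictions and a numeric encoding of its protected attributes to a standard training loss. Distance covariance is zero (in the population limit) exactly when two random vectors are independent, which is what makes it a natural dependence penalty for mixed-type and intersectional attributes. This is a minimal sketch of the general technique under stated assumptions, not the paper's exact formulation; the function names, the lambda weighting, and the one-hot encoding of protected attributes are illustrative choices.

```python
import torch
import torch.nn.functional as F


def pairwise_dist(x: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # Pairwise Euclidean distances; eps keeps the sqrt differentiable at zero.
    sq = (x.unsqueeze(0) - x.unsqueeze(1)).pow(2).sum(-1)
    return torch.sqrt(sq + eps)


def double_center(d: torch.Tensor) -> torch.Tensor:
    # Subtract row and column means, add back the grand mean.
    return d - d.mean(dim=0, keepdim=True) - d.mean(dim=1, keepdim=True) + d.mean()


def distance_covariance_sq(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Empirical squared distance covariance between samples x (n, p) and y (n, q):
    # dCov^2 = (1/n^2) * sum_ij A_ij * B_ij over double-centered distance matrices.
    a = double_center(pairwise_dist(x))
    b = double_center(pairwise_dist(y))
    return (a * b).mean()


def fairness_regularised_loss(logits, targets, protected, lam=1.0):
    # Task loss plus lam * dCov^2(predictions, protected attributes).
    # `protected` is an (n, q) numeric encoding (e.g. one-hot columns per
    # categorical attribute, standardised columns for continuous ones), so a
    # single penalty covers multitype and intersectional attributes jointly.
    task = F.binary_cross_entropy_with_logits(logits, targets)
    penalty = distance_covariance_sq(logits.unsqueeze(1), protected.float())
    return task + lam * penalty


# Toy usage: 64 samples, 5 features, one 3-level protected attribute one-hot encoded.
if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(64, 5)
    y = torch.randint(0, 2, (64,)).float()
    A = F.one_hot(torch.randint(0, 3, (64,)), num_classes=3).float()
    model = torch.nn.Linear(5, 1)
    loss = fairness_regularised_loss(model(X).squeeze(1), y, A, lam=0.5)
    loss.backward()  # the penalty is differentiable, so it trains end-to-end
    print(float(loss))
```

The weight lam trades task accuracy against statistical dependence on the protected encoding: lam=0 recovers the unregularised model, while larger values push predictions toward independence from all encoded attributes at once.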

Sources

When the Past Misleads: Rethinking Training Data Expansion Under Temporal Distribution Shifts

Exploring the Design Space of Fair Tree Learning Algorithms

Who Pays for Fairness? Rethinking Recourse under Social Burden

Audits Under Resource, Data, and Access Constraints: Scaling Laws For Less Discriminatory Alternatives

Concolic Testing on Individual Fairness of Neural Network Models

Machine Learning with Multitype Protected Attributes: Intersectional Fairness through Regularisation
