Fairness in Machine Learning

The field of machine learning is placing growing emphasis on fairness and on reducing bias in algorithms. Researchers are developing new methods to detect and mitigate bias in data, such as data bias profiles and fairness-aware grouping for continuous sensitive variables. There is also a focus on building fairness-aware algorithms, including fair epsilon nets, fair deepfake detection, and fair representation learning. These approaches aim to produce more equitable predictions and reduce discrimination in machine learning models. Notable papers in this area include Fair Epsilon Net and Geometric Hitting Set, which introduces fairness constraints to classical geometric approximation problems, and Fairness-Aware Grouping for Continuous Sensitive Variables, which proposes a grouping approach for handling continuous sensitive attributes.
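To make the idea of grouping a continuous sensitive variable concrete, here is a minimal sketch, not the method from the cited paper: it bins a continuous attribute (e.g., a skin-tone score) into quantile groups and reports the largest gap in positive-prediction rates between groups as a simple demographic-parity measure. The function name, the quantile binning scheme, and the metric are illustrative assumptions.

```python
# Minimal illustrative sketch (assumed binning scheme and metric, not the
# approach from the cited paper): group a continuous sensitive attribute
# by quantiles and measure the demographic-parity gap across groups.
import numpy as np

def demographic_parity_gap(y_pred, sensitive, n_groups=4):
    """Bin a continuous sensitive attribute into quantile groups and return
    the largest difference in positive-prediction rates between groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    # Quantile edges give roughly equal-sized groups along the sensitive axis.
    edges = np.quantile(sensitive, np.linspace(0, 1, n_groups + 1))
    groups = np.clip(np.digitize(sensitive, edges[1:-1]), 0, n_groups - 1)
    rates = [y_pred[groups == g].mean()
             for g in range(n_groups) if np.any(groups == g)]
    return max(rates) - min(rates)

# Example with synthetic data: predictions mildly correlated with the attribute.
rng = np.random.default_rng(0)
tone = rng.uniform(0, 1, 1000)                          # continuous sensitive variable
preds = (rng.uniform(0, 1, 1000) < 0.3 + 0.4 * tone).astype(int)
print(f"Demographic parity gap: {demographic_parity_gap(preds, tone):.3f}")
```

In practice, the choice of grouping (number of bins, quantile vs. fixed-width edges) directly affects the fairness measurement, which is why fairness-aware grouping of continuous attributes is treated as a research problem in its own right.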

Sources

On Fair Epsilon Net and Geometric Hitting Set

Underrepresentation, Label Bias, and Proxies: Towards Data Bias Profiles for the EU AI Act and Beyond

Fair-FLIP: Fair Deepfake Detection with Fairness-Oriented Final Layer Input Prioritising

Confounder-Free Continual Learning via Recursive Feature Normalization

Fair CCA for Fair Representation Learning: An ADNI Study

Fairness-Aware Grouping for Continuous Sensitive Variables: Application for Debiasing Face Analysis with respect to Skin Tone

Nonlinear Concept Erasure: a Density Matching Approach
