Fairness and Privacy in Machine Learning

Machine learning research is increasingly focused on models that are both fair and private. Researchers are developing new methods to mitigate bias and disparate impact, particularly in high-stakes domains such as healthcare. One direction is algorithms that learn fair representations without access to individual demographic information; another is improving the sample efficiency of differentially private fine-tuning for large language models.

Noteworthy papers include:

  • Unbiased Binning: Fairness-aware Attribute Representation, which introduces the unbiased binning problem and develops efficient algorithms for solving it.
  • SoftAdaClip: A Smooth Clipping Strategy for Fair and Private Model Training, which proposes a differentially private training method that replaces hard gradient clipping with a smooth transformation, preserving relative gradient magnitudes across samples (see the first sketch after this list).
  • Demographic-Agnostic Fairness without Harm, which proposes an optimization algorithm that jointly learns a group classifier and a set of decoupled classifiers, achieving fairness without requiring individual demographic information (see the second sketch after this list).
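To make the clipping contrast concrete, here is a minimal NumPy sketch. The tanh-shaped rescaling, the function names, and the constants are illustrative assumptions, not necessarily SoftAdaClip's exact transformation; the point is that hard clipping maps every large per-sample gradient to exactly the same norm, while a smooth clip keeps their relative ordering.

```python
import numpy as np

def hard_clip(g, C):
    # Standard DP-SGD clipping: rescale g so its norm is at most C.
    norm = np.linalg.norm(g)
    return g * min(1.0, C / max(norm, 1e-12))

def smooth_clip(g, C):
    # Hypothetical smooth alternative: a tanh-shaped rescaling that bounds
    # the clipped norm by C while keeping the map norm -> clipped norm
    # strictly increasing, so per-sample magnitude order is preserved.
    norm = np.linalg.norm(g)
    return g * (C * np.tanh(norm / C) / max(norm, 1e-12))

rng = np.random.default_rng(0)
grads = [rng.normal(size=8) * s for s in (0.2, 0.5, 1.0)]  # varied magnitudes
C = 1.0
for g in grads:
    print(f"hard: {np.linalg.norm(hard_clip(g, C)):.3f}  "
          f"smooth: {np.linalg.norm(smooth_clip(g, C)):.3f}")
```

Running this, the two larger gradients collapse to the same norm under hard clipping but remain distinguishable (and strictly below the bound) under the smooth variant.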
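The joint group-classifier/decoupled-classifiers idea can also be sketched compactly. Everything below (module names, layer shapes, the soft-mixture combination, the omitted fairness penalty) is an assumption for illustration, not the paper's actual architecture or objective: a small network infers a soft group assignment and routes each input through per-group heads.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledModel(nn.Module):
    def __init__(self, d_in, n_groups=2, n_classes=2):
        super().__init__()
        self.group_clf = nn.Linear(d_in, n_groups)   # infers soft group membership
        self.heads = nn.ModuleList(
            [nn.Linear(d_in, n_classes) for _ in range(n_groups)]
        )                                            # one decoupled classifier per group

    def forward(self, x):
        w = torch.softmax(self.group_clf(x), dim=-1)             # (B, G) soft assignment
        logits = torch.stack([h(x) for h in self.heads], dim=1)  # (B, G, C) per-group logits
        return (w.unsqueeze(-1) * logits).sum(dim=1)             # group-weighted prediction

model = DecoupledModel(d_in=16)
x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))
loss = F.cross_entropy(model(x), y)  # a fairness term would be added to this objective
loss.backward()
```

The design point is that group membership is inferred rather than supplied, so the decoupled heads can still specialize by group when demographic labels are unavailable.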

Sources

  • Unbiased Binning: Fairness-aware Attribute Representation
  • Demographic-Agnostic Fairness without Harm
  • Dynamic Necklace Splitting
  • Sample-Efficient Differentially Private Fine-Tuning via Gradient Matrix Denoising
  • SoftAdaClip: A Smooth Clipping Strategy for Fair and Private Model Training
  • Private and Fair Machine Learning: Revisiting the Disparate Impact of Differentially Private SGD
