Differential Privacy in Machine Learning

Machine learning research is placing growing emphasis on differential privacy, with a focus on algorithms and methods that balance privacy against utility. Recent work has explored the application of differential privacy to a range of machine learning tasks, including stochastic linear bandits, language model fine-tuning, and principal component analysis. Notably, researchers have proposed frameworks and algorithms that achieve differential privacy without significantly compromising model performance. Particularly noteworthy papers in this area include:

  • Efficient Differentially Private Fine-Tuning of LLMs via Reinforcement Learning, which presents a framework for fine-tuning large language models under differential privacy guarantees.
  • Decentralized Differentially Private Power Method, which proposes a method for performing differentially private principal component analysis in networked multi-agent settings.
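A common building block behind several of these approaches (for example, the clipped-SGD work listed below) is to bound each example's influence by clipping per-example gradients and then add Gaussian noise calibrated to the clipping norm. The sketch below is a minimal, generic illustration of one such DP-SGD-style update, not the algorithm of any specific paper; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One illustrative DP-SGD-style update (hypothetical helper, not from the papers above)."""
    rng = np.random.default_rng() if rng is None else rng
    # 1. Clip each per-example gradient so its L2 norm is at most clip_norm,
    #    bounding the sensitivity of the summed gradient to any one example.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    grad_sum = np.sum(clipped, axis=0)
    # 2. Add Gaussian noise scaled to the clipping norm; the privacy budget
    #    (epsilon, delta) is then determined by noise_multiplier and the
    #    number of iterations via standard composition accounting.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad_sum.shape)
    noisy_mean = (grad_sum + noise) / len(per_example_grads)
    # 3. Take an ordinary gradient step on the noisy, clipped average.
    return params - lr * noisy_mean
```

With `noise_multiplier=0` the update reduces to SGD on clipped gradients, which makes the clipping behavior easy to check in isolation.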

Sources

  • Secure Best Arm Identification in the Presence of a Copycat
  • Efficient Differentially Private Fine-Tuning of LLMs via Reinforcement Learning
  • Decentralized Differentially Private Power Method
  • Locally Differentially Private Thresholding Bandits
  • Differentially Private Clipped-SGD: High-Probability Convergence with Arbitrary Clipping Level