Differential Privacy Advances in Federated Learning and Optimization

The field of federated learning and optimization is placing growing emphasis on differential privacy, with several recent developments aimed at improving the trade-off between convergence, privacy, and fairness. Researchers are exploring new algorithms that protect user data while preserving accurate and efficient learning. One notable direction is the integration of differential privacy into existing personalized federated learning frameworks, enabling more secure and reliable personalization. Another is the development of novel matrix factorizations and optimization methods that handle unbounded data streams and provide smooth error bounds. There is also growing interest in applying local differential privacy to distributed aggregative optimization, where it can guarantee both rigorous privacy and accurate convergence in cooperative optimization and multi-agent control systems.
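To make the local-privacy model concrete (this is a generic illustration, not a reimplementation of any cited algorithm), the following sketch shows the Laplace mechanism in a local-DP setting: each agent perturbs its own message before sharing it, so no trusted aggregator is needed. The `ldp_perturb` helper, the sensitivity and epsilon values, and the averaging step are all illustrative assumptions.

```python
import numpy as np

def ldp_perturb(value, sensitivity, epsilon, rng):
    """Laplace mechanism: each agent adds noise to its own message
    before sharing, so privacy holds without a trusted aggregator."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale, size=np.shape(value))

# Hypothetical aggregative setting: each agent shares a noisy copy of
# its local decision variable; the aggregator simply averages them.
rng = np.random.default_rng(0)
local_vars = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
noisy = [ldp_perturb(x, sensitivity=1.0, epsilon=1.0, rng=rng)
         for x in local_vars]
estimate = np.mean(noisy, axis=0)
```

Because the noise is added before any message leaves an agent, the privacy guarantee does not depend on the aggregator behaving honestly; the cost is extra variance in the aggregate, which is the trade-off the cited works aim to tighten.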

Noteworthy papers include:

  • DP-Ditto, a non-trivial extension of the Ditto personalized federated learning framework to differential privacy, which achieves state-of-the-art fairness and accuracy.
  • Private Continual Counting of Unbounded Streams, which introduces novel matrix factorizations based on logarithmic perturbations and achieves smooth error bounds with reduced space and time complexity.
  • Local Differential Privacy for Distributed Stochastic Aggregative Optimization with Guaranteed Optimality, which proposes an algorithm that guarantees both accurate convergence and rigorous differential privacy in distributed aggregative optimization.
  • Facility Location Problem under Local Differential Privacy without Super-set Assumption, which presents an LDP algorithm that achieves a constant approximation ratio with a relatively small additive factor, outperforming the straightforward approach on synthetic and real-world datasets.
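For context on the continual-counting setting, the classic binary-tree (dyadic-decomposition) mechanism can be sketched as below; it is the standard baseline that matrix-factorization approaches improve upon. The bounded horizon `T`, the even split of epsilon across tree levels, and the class and method names are illustrative assumptions, not the construction from the paper above.

```python
import numpy as np

class TreeCounter:
    """Binary-tree continual counter: each dyadic interval of the stream
    gets its own Laplace noise, and any prefix sum decomposes into
    O(log t) intervals, so error grows only polylogarithmically in t."""

    def __init__(self, epsilon, T, rng):
        self.rng = rng
        depth = int(np.ceil(np.log2(max(T, 2)))) + 1
        self.scale = depth / epsilon  # epsilon split across tree levels
        self.nodes = {}               # (level, index) -> noisy partial sum
        self.items = []
        self.t = 0

    def update(self, x):
        self.items.append(x)
        self.t += 1

    def _node(self, level, idx):
        # Lazily compute and cache the noisy sum of a complete dyadic
        # interval; reusing the cached noise is what keeps privacy intact.
        if (level, idx) not in self.nodes:
            size = 1 << level
            lo = idx * size
            true = sum(self.items[lo:lo + size])
            self.nodes[(level, idx)] = true + self.rng.laplace(0.0, self.scale)
        return self.nodes[(level, idx)]

    def prefix_sum(self):
        # Decompose [0, t) into O(log t) complete dyadic intervals.
        total, start, rem = 0.0, 0, self.t
        level = rem.bit_length()
        while rem > 0:
            level -= 1
            size = 1 << level
            if rem >= size:
                total += self._node(level, start // size)
                start += size
                rem -= size
        return total
```

With a large epsilon the noisy count tracks the true count closely; the continual-counting paper's contribution is a different factorization that, unlike this bounded-horizon tree, handles unbounded streams with smooth error bounds.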

Sources

Convergence-Privacy-Fairness Trade-Off in Personalized Federated Learning

Private Continual Counting of Unbounded Streams

Local Differential Privacy for Distributed Stochastic Aggregative Optimization with Guaranteed Optimality

Facility Location Problem under Local Differential Privacy without Super-set Assumption