Differential Privacy in Federated Learning

The field of federated learning is moving toward broader adoption of differential privacy techniques to protect client data. Researchers are exploring methods that balance the trade-off between privacy and model accuracy, including strategic incentivization, random rebalancing, and multi-hop privacy propagation. Notable papers in this area include the following (a brief sketch of the shared local-DP mechanism appears after the list):

  • A token-based incentivization mechanism for locally differentially private federated learning that balances privacy and accuracy.
  • An algorithm for differentially private decentralized min-max optimization with theoretical privacy bounds and experimental validation.
  • A robust pipeline for differentially private federated learning on imbalanced clinical data that achieves high recall while maintaining strong privacy guarantees.
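
As a rough illustration of the mechanism most of these papers build on, the sketch below shows one round of federated averaging in which each client clips its update and adds Gaussian noise locally before sharing it with the server. The clipping norm, noise scale, learning rate, and function names are illustrative assumptions, not values or APIs taken from any of the listed papers.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Local DP step: the client clips its update to a fixed norm and adds
    Gaussian noise before the update ever leaves the device."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

def federated_round(global_model, client_updates, clip_norm=1.0, noise_std=0.5, lr=0.1):
    """One round of federated averaging over locally privatized client updates.
    Larger noise_std means stronger privacy but a noisier, less accurate average."""
    rng = np.random.default_rng(0)
    noisy = [clip_and_noise(u, clip_norm, noise_std, rng) for u in client_updates]
    avg_update = np.mean(noisy, axis=0)  # independent noise partially cancels across clients
    return global_model - lr * avg_update

# Toy usage: 10 clients, each holding an update for a 5-parameter model.
model = np.zeros(5)
updates = [np.random.default_rng(i).normal(size=5) for i in range(10)]
model = federated_round(model, updates, noise_std=0.5)
print(model)
```

The noise scale directly controls the privacy-accuracy trade-off that the papers above tackle from different angles (incentives, rebalancing, or propagation across multiple hops).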

Sources

Differentially Private Federated Clustering with Random Rebalancing

Strategic Incentivization for Locally Differentially Private Federated Learning

Enhancing Privacy in Decentralized Min-Max Optimization: A Differentially Private Approach

Multi-Hop Privacy Propagation for Differentially Private Federated Learning in Social Networks

A Robust Pipeline for Differentially Private Federated Learning on Imbalanced Clinical Data using SMOTETomek and FedProx

Federated Anomaly Detection for Multi-Tenant Cloud Platforms with Personalized Modeling
