Federated learning is increasingly adopting differential privacy techniques to protect client data. Researchers are exploring ways to balance privacy against model accuracy, including strategic incentivization, random rebalancing, and multi-hop privacy propagation. Notable papers in this area include:
- A paper proposing a token-based incentivization mechanism for locally differentially private federated learning that balances privacy against accuracy.
- A paper introducing an algorithm for differentially private decentralized min-max optimization, with theoretical privacy bounds and experimental validation.
- A paper presenting a robust pipeline for differentially private federated learning on imbalanced clinical data, achieving high recall while maintaining strong privacy guarantees.
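The local differential privacy mentioned above typically means each client perturbs its own model update before sending it to the server, so the server never observes raw updates. A minimal sketch of this pattern, clipping each update to a fixed L2 norm and adding Gaussian noise (the function names and the `clip_norm` and `noise_multiplier` parameters are illustrative, not taken from any of the papers listed):

```python
import math
import random

def clip(update, clip_norm):
    """Scale the update vector so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    return [x * scale for x in update]

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip and add Gaussian noise on the client, before the update is sent.

    Illustrative sketch: real DP accounting would derive the noise scale
    from a target (epsilon, delta) budget.
    """
    rng = rng or random.Random(0)
    sigma = noise_multiplier * clip_norm
    return [x + rng.gauss(0.0, sigma) for x in clip(update, clip_norm)]

def aggregate(updates):
    """Server-side averaging of already-noisy client updates."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

# Example: three clients privatize their updates locally, then the
# server averages them without ever seeing the raw values.
rng = random.Random(42)
raw_updates = [[rng.uniform(-2, 2) for _ in range(4)] for _ in range(3)]
noisy_updates = [privatize_update(u, rng=rng) for u in raw_updates]
global_update = aggregate(noisy_updates)
```

Averaging over many clients partially cancels the zero-mean noise, which is one source of the privacy/accuracy trade-off the papers above study: more noise per client means stronger local guarantees but a noisier global model.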