Advances in Federated Learning Security and Privacy

Research in federated learning is increasingly focused on securing collaborative model training against threats such as Byzantine attacks, model poisoning, and backdoor attacks, while preserving participant privacy. A key direction is the design of frameworks that are simultaneously robust, privacy-preserving, and efficient. Notable papers in this area include:

Deciphering the Interplay between Attack and Protection Complexity in Privacy-Preserving Federated Learning: introduces a theoretical framework for analyzing the trade-off between privacy guarantees and system utility.
Fed-DPRoC: a federated learning framework that combines differential privacy, Byzantine robustness, and communication efficiency.
FedUP: a lightweight federated unlearning algorithm that removes malicious clients' influence by pruning specific connections in the attacked model.
DOPA: a framework that simulates heterogeneous local training dynamics to craft universally effective and stealthy backdoor triggers.
BadFU: the first backdoor attack in the context of federated unlearning, showing that an adversary can inject backdoors into the global model through seemingly legitimate unlearning requests.
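To make the Byzantine-robustness theme concrete, below is a minimal sketch of coordinate-wise trimmed-mean aggregation, one standard robust aggregation rule that bounds the influence of a small number of malicious clients. This is an illustrative example only; the function name and setup are assumptions, not taken from any of the papers cited here.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: per coordinate, drop the trim_k
    largest and trim_k smallest values before averaging, so up to
    trim_k Byzantine clients cannot drag the aggregate arbitrarily."""
    stacked = np.sort(np.stack(updates), axis=0)    # sort each coordinate
    kept = stacked[trim_k: len(updates) - trim_k]   # discard the extremes
    return kept.mean(axis=0)

# Four honest clients report updates near 1.0; one attacker sends 100.0.
honest = [np.full(3, 1.0 + 0.01 * i) for i in range(4)]
poisoned = honest + [np.full(3, 100.0)]
aggregate = trimmed_mean(poisoned, trim_k=1)  # stays close to the honest mean
```

A plain average of the same updates would be pulled to roughly 20.8 per coordinate by the single attacker, while the trimmed mean remains near 1.0, which is the intuition behind the robust aggregation rules these frameworks build on.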
Sources
Deciphering the Interplay between Attack and Protection Complexity in Privacy-Preserving Federated Learning
When Secure Aggregation Falls Short: Achieving Long-Term Privacy in Asynchronous Federated Learning for LEO Satellite Networks
On the Security and Privacy of Federated Learning: A Survey with Attacks, Defenses, Frameworks, Applications, and Future Directions