Advances in Federated Learning Security and Privacy

Research in federated learning is increasingly focused on securing collaborative model training against attacks such as Byzantine behavior, model poisoning, and backdoor injection, while preserving participant privacy. A key direction is the design of frameworks that are simultaneously robust, privacy-preserving, and efficient. Notable papers in this area include:

- Deciphering the Interplay between Attack and Protection Complexity in Privacy-Preserving Federated Learning: introduces a theoretical framework for analyzing the trade-off between privacy guarantees and system utility.
- Fed-DPRoC: a federated learning framework that combines differential privacy, Byzantine robustness, and communication efficiency.
- FedUP: a lightweight federated unlearning algorithm that mitigates malicious clients' influence by pruning specific connections within the attacked model.
- DOPA: a framework that simulates heterogeneous local training dynamics to craft universally effective and stealthy backdoor triggers.
- BadFU: the first backdoor attack in the context of federated unlearning, demonstrating that an adversary can inject backdoors into the global model through seemingly legitimate unlearning requests.
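To make the two defense families discussed above concrete, here is a minimal sketch of a single aggregation round that combines a Byzantine-robust rule (coordinate-wise median) with a differential-privacy-style recipe (update clipping plus Gaussian noise). This is a toy illustration under simplified assumptions; the function name and all parameters are invented for this example and do not come from any of the papers listed below.

```python
import numpy as np

def robust_private_aggregate(client_updates, clip_norm=1.0, noise_std=0.01, seed=0):
    """Illustrative server-side aggregation combining two defenses:
    1. per-client update clipping + Gaussian noise (the standard DP-SGD-style recipe);
    2. coordinate-wise median (a classic Byzantine-robust aggregation rule).
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Clipping bounds any single client's influence on the aggregate,
        # which both limits poisoning and calibrates the DP noise scale.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    # The coordinate-wise median discards outlier (potentially Byzantine) values.
    aggregate = np.median(np.stack(clipped), axis=0)
    # Gaussian noise proportional to the clipping bound gives a DP-style guarantee.
    return aggregate + rng.normal(0.0, noise_std, size=aggregate.shape)

# Toy round: 8 honest clients near the true update, 2 poisoned outliers.
honest = [np.array([0.5, -0.2]) + 0.01 * np.random.default_rng(i).normal(size=2)
          for i in range(8)]
poisoned = [np.array([50.0, 50.0])] * 2
agg = robust_private_aggregate(honest + poisoned)
```

Despite two clients submitting updates a hundred times larger than the honest ones, the clipped median stays close to the honest consensus, which is the basic intuition behind robust aggregation rules.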

Sources

Deciphering the Interplay between Attack and Protection Complexity in Privacy-Preserving Federated Learning

Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering

Argos: A Decentralized Federated System for Detection of Traffic Signs in CAVs

Fed-DPRoC: Communication-Efficient Differentially Private and Robust Federated Learning

When Secure Aggregation Falls Short: Achieving Long-Term Privacy in Asynchronous Federated Learning for LEO Satellite Networks

On the Security and Privacy of Federated Learning: A Survey with Attacks, Defenses, Frameworks, Applications, and Future Directions

FedUP: Efficient Pruning-based Federated Unlearning for Model Poisoning Attacks

Federated Action Recognition for Smart Worker Assistance Using FastPose

DOPA: Stealthy and Generalizable Backdoor Attacks from a Single Client under Challenging Federated Constraints

BadFU: Backdoor Federated Learning through Adversarial Machine Unlearning
