Federated Learning Security Advances

Research in federated learning security is converging on defenses for the vulnerabilities inherent in its decentralized architecture, particularly backdoor attacks and model poisoning. Recent work highlights the role of robust aggregation methods, careful hyperparameter tuning, and secure aggregation protocols in mitigating these threats. At the same time, increasingly adaptive attack strategies are driving the design of stronger countermeasures, pushing the field toward more secure and robust federated learning frameworks. Noteworthy papers include: FedThief, which proposes a self-centered federated learning attack paradigm in which a participant harms others to benefit itself, and Hammer and Anvil, which presents a principled defense against backdoors in federated learning. Additionally, Stealth by Conformity introduces an adaptive poisoning strategy that evades robust aggregation defenses, while Prototype-Guided Robust Learning proposes a backdoor defense that addresses limitations of prior approaches.
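To make the robust aggregation idea concrete, below is a minimal sketch of coordinate-wise trimmed-mean aggregation, one standard robust aggregation scheme. The function name, `trim_ratio` parameter, and toy data are illustrative assumptions, not drawn from any of the papers listed here.

```python
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_ratio=0.2):
    """Aggregate client updates, discarding the most extreme values
    per coordinate to limit the influence of poisoned updates."""
    updates = np.stack(client_updates)          # shape: (n_clients, n_params)
    n_clients = updates.shape[0]
    k = int(n_clients * trim_ratio)             # values trimmed per side
    sorted_updates = np.sort(updates, axis=0)   # sort each coordinate independently
    trimmed = sorted_updates[k:n_clients - k] if k > 0 else sorted_updates
    return trimmed.mean(axis=0)

# Toy example: nine honest updates near the true value, one poisoned
# update with a large malicious shift. The trimmed mean stays close to
# the honest value, whereas a plain mean would be pulled off target.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(9)]
poisoned = [np.full(4, 50.0)]
print(trimmed_mean_aggregate(honest + poisoned, trim_ratio=0.2))
```

The trimming step is what adaptive attacks such as the one in Stealth by Conformity target: a poisoned update crafted to look statistically conformant survives the trim and still shifts the aggregate.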

Sources

FedThief: Harming Others to Benefit Oneself in Self-Centered Federated Learning

Backdoor Poisoning Attack Against Face Spoofing Attack Detection Methods

On Hyperparameters and Backdoor-Resistance in Horizontal Federated Learning

On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy

Hammer and Anvil: A Principled Defense Against Backdoors in Federated Learning

DSFL: A Dual-Server Byzantine-Resilient Federated Learning Framework via Group-Based Secure Aggregation

Stealth by Conformity: Evading Robust Aggregation through Adaptive Poisoning

Silent Until Sparse: Backdoor Attacks on Semi-Structured Sparsity

Prototype-Guided Robust Learning against Backdoor Attacks
