Advances in Secure Federated Learning and Homomorphic Encryption

The field of federated learning and homomorphic encryption is rapidly evolving, with a focus on developing robust and efficient methods that protect user privacy and defend against attacks. Recent research has explored innovative approaches to the challenges of Byzantine attacks, client heterogeneity, and high-dimensional models in federated learning. Notably, new mechanisms have been proposed to identify and exclude poisoned models and to aggregate client updates while preserving collaborative signals. Advances in homomorphic encryption have likewise produced more efficient and scalable frameworks, including binary variants of existing schemes and selective encryption methods that protect only the most sensitive parameters. Together, these developments have significant implications for deploying secure, private machine learning models in real-world applications.

Noteworthy papers include FedGuard, which proposes a federated learning mechanism that defends against diverse Byzantine attacks even when malicious clients are in the majority; SenseCrypt, which introduces a sensitivity-guided selective homomorphic encryption framework for cross-device federated learning; SelectiveShield, a lightweight hybrid defense that adaptively combines selective homomorphic encryption with differential privacy to mitigate gradient leakage; and PrivDFS, which presents a new paradigm for private inference that replaces a single exposed representation with distributed feature sharing, preserving strong privacy guarantees while maintaining model utility.
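To make the selective-encryption idea concrete, the sketch below splits a client's model update into a small "sensitive" part destined for homomorphic encryption and a plaintext remainder. This is not SenseCrypt's or SelectiveShield's actual algorithm: it assumes per-parameter magnitude as a stand-in sensitivity score, the helper names (`select_sensitive_indices`, `split_update`, `stub_encrypt`) and the 5% fraction are hypothetical, and `stub_encrypt` only marks where a real CKKS call (e.g., via a library such as TenSEAL) would go.

```python
import numpy as np

def select_sensitive_indices(update, fraction=0.1):
    """Rank parameters by absolute magnitude as a simple stand-in
    sensitivity score and return the indices of the top fraction.
    (Assumption: the cited papers may use different criteria.)"""
    k = max(1, int(fraction * update.size))
    return np.argsort(np.abs(update))[-k:]

def stub_encrypt(values):
    """Placeholder for a homomorphic encryption call; a real system
    would encrypt these values with a CKKS scheme here."""
    return list(values)  # stand-in "ciphertext"

def split_update(update, fraction=0.1):
    """Split a flat model update into an encrypted sensitive part and
    a plaintext remainder, mirroring the selective-encryption idea."""
    idx = select_sensitive_indices(update, fraction)
    mask = np.zeros(update.size, dtype=bool)
    mask[idx] = True
    encrypted = stub_encrypt(update[mask])
    plaintext = update[~mask]
    return encrypted, plaintext, mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    update = rng.normal(size=1000)      # mock gradient/model update
    enc, plain, mask = split_update(update, fraction=0.05)
    print(f"encrypted {mask.sum()} of {update.size} parameters")
```

The design trade-off this illustrates is the one motivating these papers: encrypting only a small, carefully chosen fraction of parameters sharply reduces ciphertext volume and computation on resource-constrained devices, while still denying an eavesdropper the coordinates most useful for gradient inversion.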

Sources

FedGuard: A Diverse-Byzantine-Robust Mechanism for Federated Learning with Major Malicious Clients

A Non-leveled and Reliable Approximate FHE Framework through Binarized Polynomial Rings

Heterogeneity-Oblivious Robust Federated Learning

SenseCrypt: Sensitivity-guided Selective Homomorphic Encryption for Joint Federated Learning in Cross-Device Scenarios

Evaluating Selective Encryption Against Gradient Inversion Attacks

SelectiveShield: Lightweight Hybrid Defense Against Gradient Leakage in Federated Learning

From Split to Share: Private Inference with Distributed Feature Sharing
