The field of federated learning is placing growing emphasis on security and information-theoretic guarantees. Recent work focuses on protocols that resist data reconstruction attacks and keep client updates confidential. Techniques under exploration include per-element masking strategies and neural network-based leakage estimators. These advances promise more efficient and secure collaboration among clients while clarifying the fundamental limits of secure aggregation. Notable papers in this area include:
- Information-Theoretic Decentralized Secure Aggregation with Collusion Resilience, which establishes the fundamental performance limits of decentralized secure aggregation.
- Per-element Secure Aggregation against Data Reconstruction Attacks in Federated Learning, which proposes a novel enhancement to secure aggregation that prevents the exposure of under-contributed elements.
- Neural Estimation of Information Leakage for Secure Communication System Design, which presents an improved mutual information estimator based on the variational contrastive log-ratio upper bound (CLUB) framework.