Federated Learning and Privacy Advances

The field of federated learning is placing increasing emphasis on privacy and security. Researchers are exploring new methods that protect sensitive data while still enabling collaborative model training. One direction is the development of differentially private clustering methods, which allow distributed data to be analyzed without compromising individual privacy (a minimal illustrative sketch follows the paper list below). Another is the hardening of federated learning algorithms against Byzantine attacks through robust aggregation of client gradients (also sketched below). In addition, there is growing interest in new forms of federated clustering that handle complex data-partitioning scenarios while preserving privacy. Notable papers in this area include:

Differentially Private Federated $k$-Means Clustering with Server-Side Data presents a new fully federated, differentially private algorithm for $k$-means clustering.

Differentially Private Explanations for Clusters introduces a framework that explains black-box clustering results while satisfying differential privacy.

PrivTru offers a technical perspective on data trustees guided by privacy-by-design principles and instantiates a data trustee that provably achieves optimal privacy properties.

FedGA-Tree explores an alternative approach that uses a genetic algorithm to construct personalized decision trees and to accommodate both categorical and numerical data.

Boosting Gradient Leakage Attacks empirically demonstrates that clients' data can still be effectively reconstructed even in realistic FL environments.

Generalization Error Analysis for Attack-Free and Byzantine-Resilient Decentralized Learning presents a fine-grained generalization error analysis for both attack-free and Byzantine-resilient decentralized learning with heterogeneous data.

Wavelet Scattering Transform and Fourier Representation for Offline Detection of Malicious Clients proposes a detection algorithm that labels malicious clients before training, using locally computed compressed representations.

Weighted Loss Methods for Robust Federated Learning introduces a weighted loss that aligns honest workers' gradients despite data heterogeneity, which makes Byzantine gradients easier to identify.

Private Aggregation for Byzantine-Resilient Heterogeneous Federated Learning proposes a multi-stage method that achieves information-theoretic privacy guarantees and Byzantine resilience under data heterogeneity.

A new type of federated clustering proposes data collaboration clustering, a non-model-sharing method that supports clustering over complex data-partitioning scenarios.
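
To make the differentially private clustering direction concrete, here is a minimal sketch of one common pattern: each client clips its points, assigns them to the current centroids, and shares only noisy per-cluster sums and counts, from which the server recomputes centroids. The function names, the Laplace mechanism, and the simplified per-round sensitivity accounting are assumptions for illustration; this is not the specific server-side-data algorithm from the paper above.

```python
import numpy as np


def local_kmeans_stats(points, centroids):
    """Assign a client's points to the nearest centroid and return
    per-cluster sums and counts (the only statistics the client shares)."""
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    k, d = centroids.shape
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = points[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts


def dp_noise(shape, sensitivity, epsilon, rng):
    """Laplace noise calibrated to an L1 sensitivity and a privacy budget."""
    return rng.laplace(0.0, sensitivity / epsilon, size=shape)


def federated_dp_kmeans(clients, k=3, rounds=5, epsilon=1.0, clip=1.0, seed=0):
    """Hypothetical federated k-means loop where every shared statistic is
    perturbed on the client before aggregation (simplified accounting)."""
    rng = np.random.default_rng(seed)
    d = clients[0].shape[1]
    centroids = rng.normal(size=(k, d))
    for _ in range(rounds):
        total_sums = np.zeros((k, d))
        total_counts = np.zeros(k)
        for pts in clients:
            pts = np.clip(pts, -clip, clip)  # bound each point's contribution
            sums, counts = local_kmeans_stats(pts, centroids)
            # Split the per-round budget between the two released statistics.
            sums += dp_noise(sums.shape, clip * d, epsilon / 2, rng)
            counts += dp_noise(counts.shape, 1.0, epsilon / 2, rng)
            total_sums += sums
            total_counts += counts
        # The server only ever sees noisy aggregates.
        centroids = total_sums / np.maximum(total_counts, 1.0)[:, None]
    return centroids


# Toy run: three clients, each holding points around a different center.
rng = np.random.default_rng(1)
clients = [rng.normal(loc=c, scale=0.05, size=(200, 2)) for c in (-0.8, 0.0, 0.8)]
print(federated_dp_kmeans(clients, k=3, rounds=5, epsilon=5.0))
```

Clipping is what makes the Laplace scale well defined; a real system would also track cumulative privacy loss across rounds with a proper accountant rather than spending a fixed epsilon per round.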
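
For the Byzantine-resilience direction, robust aggregation typically replaces the plain mean of client gradients with an order-statistic rule. The sketch below shows two standard choices, coordinate-wise median and trimmed mean; the function names and the `trim_ratio` parameter are illustrative assumptions and do not reproduce the weighted-loss or private-aggregation schemes of the papers listed above.

```python
import numpy as np


def coordinate_wise_median(client_grads):
    """Aggregate gradients by taking the median of every coordinate;
    a single outlier cannot drag the median arbitrarily far."""
    return np.median(np.stack(client_grads, axis=0), axis=0)


def trimmed_mean(client_grads, trim_ratio=0.2):
    """Sort each coordinate across clients, drop the top and bottom
    `trim_ratio` fraction, and average what remains."""
    stacked = np.sort(np.stack(client_grads, axis=0), axis=0)
    k = int(trim_ratio * stacked.shape[0])
    kept = stacked[k:stacked.shape[0] - k] if k > 0 else stacked
    return kept.mean(axis=0)


# Toy round: 8 honest clients plus 2 Byzantine clients sending huge updates.
rng = np.random.default_rng(0)
updates = [rng.normal(1.0, 0.1, size=5) for _ in range(8)]
updates += [np.full(5, 1e6), np.full(5, 5e5)]

print("plain mean   :", np.mean(np.stack(updates), axis=0))  # pulled far off
print("median       :", coordinate_wise_median(updates))      # stays near 1.0
print("trimmed mean :", trimmed_mean(updates, trim_ratio=0.2))
```

In this toy round the plain mean is dominated by the two malicious updates, while both robust rules stay close to the honest value of roughly 1.0, which is the basic property the robust-aggregation papers build on.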

Sources

Differentially Private Federated $k$-Means Clustering with Server-Side Data

Differentially Private Explanations for Clusters

PrivTru: A Privacy-by-Design Data Trustee Minimizing Information Leakage

Federated Learning on Stochastic Neural Networks

FedGA-Tree: Federated Decision Tree using Genetic Algorithm

Boosting Gradient Leakage Attacks: Data Reconstruction in Realistic FL Settings

Generalization Error Analysis for Attack-Free and Byzantine-Resilient Decentralized Learning with Data Heterogeneity

Wavelet Scattering Transform and Fourier Representation for Offline Detection of Malicious Clients in Federated Learning

Weighted Loss Methods for Robust Federated Learning under Data Heterogeneity

Private Aggregation for Byzantine-Resilient Heterogeneous Federated Learning

A new type of federated clustering: A non-model-sharing approach
