Advances in Federated Learning and Privacy Preservation

The field of federated learning is moving toward more secure and privacy-preserving methods for collaborative model training. Researchers are exploring approaches to mitigate threats such as data reconstruction attacks (DRAs), gradient-based attacks, and untargeted attacks. One notable direction is the use of explainable AI with targeted detection and mitigation strategies to identify and address malicious layers within models. Another area of focus is robust aggregation methods that detect and remove malicious client models, thereby defending against untargeted attacks. Additionally, there is growing interest in integrating differential privacy, homomorphic encryption, and other privacy-preserving techniques into federated learning pipelines to protect sensitive client data.

Notable papers include:

Random Client Selection on Contrastive Federated Learning for Tabular Data, which presents a comprehensive experimental analysis of gradient-based attacks in contrastive federated learning (CFL) environments and evaluates random client selection as a defensive strategy.

Nosy Layers, Noisy Fixes: Tackling DRAs in Federated Learning Systems using Explainable AI, which introduces DRArmor, a novel defense mechanism that integrates explainable AI with targeted detection and mitigation strategies against DRAs.

FedGraM: Defending Against Untargeted Attacks in Federated Learning via Embedding Gram Matrix, which proposes a novel robust aggregation method designed to defend against untargeted attacks in FL.
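Random client selection as a defense can be illustrated with a minimal sketch: each round, the server samples only a small fraction of clients, limiting how many updates any observer can link to a given client. The pool size, sampling fraction, and function names below are illustrative assumptions, not the configuration studied in the paper.

```python
import random

def select_clients(client_ids, fraction=0.1, seed=None):
    """Uniformly sample a fraction of clients to participate in one round.

    Restricting participation per round bounds how often any single
    client's gradients are exposed, which is the intuition behind
    random selection as a defense against gradient-based attacks.
    """
    rng = random.Random(seed)
    k = max(1, int(len(client_ids) * fraction))
    return rng.sample(client_ids, k)

# Example: pick 2 of 20 hypothetical clients for one training round.
clients = list(range(20))
round_participants = select_clients(clients, fraction=0.1, seed=42)
```

A fixed seed is used here only to make the sampling reproducible for illustration; a deployed server would draw fresh randomness each round.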
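Robust aggregation defenses against untargeted attacks generally score client updates and exclude outliers before averaging. The sketch below uses simple distance-to-median filtering as a generic stand-in; FedGraM's actual embedding Gram-matrix criterion is more involved, and the keep fraction here is an illustrative assumption.

```python
import numpy as np

def robust_mean(updates, keep_fraction=0.8):
    """Drop the updates farthest from the coordinate-wise median, then average.

    This is a generic outlier-filtering aggregator, not FedGraM itself:
    updates whose distance to the median is largest are treated as
    potentially malicious and excluded from the round's average.
    """
    U = np.stack(updates)
    median = np.median(U, axis=0)
    dists = np.linalg.norm(U - median, axis=1)
    k = max(1, int(len(updates) * keep_fraction))
    keep = np.argsort(dists)[:k]
    return U[keep].mean(axis=0)
```

With three honest updates near 1.0 and one malicious update at 100.0, keeping 75% of clients filters out the malicious contribution and the aggregate stays near 1.0.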
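One common way to integrate differential privacy into a federated pipeline is to clip each client update and add Gaussian noise before aggregation. The following is a minimal sketch under assumed parameter values (clip norm and noise multiplier are illustrative), and it omits the privacy accounting a real deployment would require.

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Average client updates with per-update clipping and Gaussian noise.

    Each update is rescaled so its L2 norm is at most `clip_norm`,
    bounding any single client's influence; calibrated Gaussian noise
    then masks individual contributions in the aggregate.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(updates),
                       size=mean.shape)
    return mean + noise
```

Setting `noise_multiplier=0.0` recovers plain clipped averaging, which is useful for testing the clipping step in isolation.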

Sources

Random Client Selection on Contrastive Federated Learning for Tabular Data

Nosy Layers, Noisy Fixes: Tackling DRAs in Federated Learning Systems using Explainable AI

Verifiably Forgotten? Gradient Differences Still Enable Data Reconstruction in Federated Unlearning

Locally Differentially Private Graph Clustering via the Power Iteration Method

Learning hidden cascades via classification

FedGraM: Defending Against Untargeted Attacks in Federated Learning via Embedding Gram Matrix

Personalized and Resilient Distributed Learning Through Opinion Dynamics

EC-LDA : Label Distribution Inference Attack against Federated Graph Learning with Embedding Compression

Prediction of Reposting on X

Privacy-Aware Cyberterrorism Network Analysis using Graph Neural Networks and Federated Learning

Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach

Fuzzy Information Evolution with Three-Way Decision in Social Network Group Decision-Making

Redefining Clustered Federated Learning for System Identification: The Path of ClusterCraft
