The field of federated learning is moving toward personalized, privacy-preserving approaches. Recent work develops algorithms and frameworks that handle non-IID data distributions, mitigate model-poisoning attacks, and guarantee secure aggregation. Techniques such as adaptive collaboration, distance-based aggregation, and pliable index coding show promise for improving both the accuracy and the efficiency of federated models, and integration with adjacent areas such as graph neural networks and recommendation systems is yielding new architectures and methods. Several papers propose notable solutions to these challenges: CLoVE uses client embeddings derived from model losses to identify and separate clients belonging to different clusters; SABRE-FL filters poisoned prompt updates with an embedding-space anomaly detector; Detect & Score provides privacy-preserving misbehaviour detection together with contribution evaluation; and Flotilla offers a scalable, modular federated learning framework. Overall, the field is converging on more robust, efficient, and privacy-preserving methods for distributed machine learning.
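To make two of the ideas above concrete, here is a minimal sketch, not the papers' actual algorithms: a CLoVE-style assignment in which each client's vector of per-cluster-model losses acts as its embedding and the client joins the lowest-loss cluster, and a generic distance-based aggregation rule that drops client updates far from the coordinate-wise median before averaging. All function names and the `keep_frac` parameter are illustrative assumptions.

```python
import math
from statistics import median

def cluster_clients_by_loss(client_losses):
    """Sketch of loss-embedding clustering (CLoVE-like, simplified).

    client_losses[i][k] is the loss of candidate cluster model k evaluated
    on client i's local data; the row itself is the client's embedding.
    Each client is assigned to the cluster whose model fits it best.
    """
    return [row.index(min(row)) for row in client_losses]

def distance_filtered_mean(updates, keep_frac=0.8):
    """Sketch of distance-based aggregation (not any specific paper's rule).

    Compute the coordinate-wise median of the client updates, discard the
    updates farthest from it, and average the survivors. keep_frac is the
    fraction of clients retained; it is a hypothetical tuning knob.
    """
    dim = len(updates[0])
    med = [median(u[j] for u in updates) for j in range(dim)]
    n_keep = max(1, math.ceil(keep_frac * len(updates)))
    kept = sorted(updates, key=lambda u: math.dist(u, med))[:n_keep]
    return [sum(u[j] for u in kept) / len(kept) for j in range(dim)]
```

For example, with three clients and two cluster models, `cluster_clients_by_loss([[0.1, 2.0], [1.9, 0.2], [0.3, 1.5]])` assigns clients 0 and 2 to cluster 0 and client 1 to cluster 1; and `distance_filtered_mean` with one outlier update such as `[10, 10]` among benign updates near `[1, 1]` excludes the outlier before averaging.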
Advances in Federated Learning
Sources
Detect & Score: Privacy-Preserving Misbehaviour Detection and Contribution Evaluation in Federated Learning
Privacy-Preserving Federated Learning Scheme with Mitigating Model Poisoning Attacks: Vulnerabilities and Countermeasures
Accuracy and Security-Guaranteed Participant Selection and Beamforming Design for RIS-Assisted Federated Learning
DARTS: A Dual-View Attack Framework for Targeted Manipulation in Federated Sequential Recommendation