Federated Learning Advances

Federated learning research is converging on three persistent challenges: scalability, privacy, and data heterogeneity. Recent work improves model accuracy and efficiency through new client selection algorithms, secure aggregation methods, and knowledge distillation techniques. Noteworthy papers include a Knowledgeable Client Insertion scheme, which adds a small number of knowledgeable clients to improve learning accuracy; an adaptive clustering scheme for client selection, which dynamically adjusts the number of clusters to reduce communication costs; and a demonstration of training a neural network under fully homomorphic encryption, keeping data encrypted throughout training for privacy preservation. In addition, a client-level assessment of collaborative backdoor poisoning in non-IID federated learning underscores the security vulnerabilities that federated deployments must address.
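To make the client-selection and aggregation ideas above concrete, the sketch below clusters simulated client updates and averages the selected ones FedAvg-style. It is a minimal illustration under assumed details (k-means over flattened updates, one representative per cluster, sample-size weighting); it does not reproduce any of the cited papers' algorithms, and all function names are hypothetical.

```python
# Illustrative sketch only: clustering-based client selection plus FedAvg-style
# aggregation. Details (k-means, one representative per cluster, size weighting)
# are assumptions for illustration, not any cited paper's method.
import numpy as np


def cluster_based_selection(client_updates, num_clusters, rng, iters=10):
    """Cluster flattened client updates and pick one representative client per cluster."""
    updates = np.stack(client_updates)                       # (num_clients, dim)
    centroids = updates[rng.choice(len(updates), num_clusters, replace=False)]
    for _ in range(iters):                                   # a few k-means steps
        dists = np.linalg.norm(updates[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(num_clusters):
            members = updates[labels == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    # Final assignment, then choose the client closest to each non-empty centroid.
    dists = np.linalg.norm(updates[:, None, :] - centroids[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)
    selected = []
    for k in range(num_clusters):
        members = np.where(labels == k)[0]
        if len(members) > 0:
            selected.append(int(members[dists[members, k].argmin()]))
    return selected


def fedavg(global_weights, client_updates, client_sizes, selected):
    """Apply a sample-size-weighted average of the selected clients' updates."""
    total = sum(client_sizes[i] for i in selected)
    delta = sum(client_sizes[i] / total * client_updates[i] for i in selected)
    return global_weights + delta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, num_clients = 16, 20
    global_weights = np.zeros(dim)
    # Simulated non-IID updates: each client drifts toward one of three "data modes".
    modes = rng.normal(size=(3, dim))
    updates = [modes[i % 3] + 0.1 * rng.normal(size=dim) for i in range(num_clients)]
    sizes = rng.integers(50, 500, size=num_clients)

    chosen = cluster_based_selection(updates, num_clusters=3, rng=rng)
    new_weights = fedavg(global_weights, updates, sizes, chosen)
    print("selected clients:", chosen)
    print("update norm:", np.linalg.norm(new_weights - global_weights))
```

Selecting one representative per cluster keeps the round's communication cost proportional to the number of clusters rather than the number of clients, which is the intuition behind adaptive clustering for client selection.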
Sources
Optimising Intrusion Detection Systems in Cloud-Edge Continuum with Knowledge Distillation for Privacy-Preserving and Efficient Communication
Diversity-Driven Learning: Tackling Spurious Correlations and Data Heterogeneity in Federated Models