Federated Learning Advances

The field of federated learning is moving toward more efficient, privacy-preserving, and personalized approaches. Researchers are developing methods that address data heterogeneity, limited client compute, and privacy constraints, with growing interest in frameworks that adapt to diverse client settings, such as heterogeneous label sets or scarce local data. Techniques including knowledge distillation, representation fine-tuning, and sparse Mixture of Experts are being investigated to improve model quality and reduce communication overhead.
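
As a concrete illustration of one of these techniques, the sketch below shows a standard temperature-scaled knowledge distillation loss of the kind federated methods adapt when transferring knowledge between global and local models. The function name, temperature, and mixing weight are illustrative assumptions, not taken from any paper listed below.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Temperature-scaled KL distillation loss (Hinton et al., 2015).

    In a federated setting, teacher_logits might come from the global
    model or aggregated prototypes and student_logits from the local
    client model; the exact pairing is method-specific.
    """
    # Soften both distributions, then match them with KL divergence.
    # The T^2 factor keeps gradient magnitudes comparable across
    # temperature choices.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2

# Example: a client mixes the distillation term with its local task loss.
student_logits = torch.randn(8, 10)   # batch of 8 examples, 10 classes
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(student_logits, labels) \
       + 0.5 * distillation_loss(student_logits, teacher_logits)
```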

Some noteworthy papers in this area include Pareto Actor-Critic for Communication and Computation Co-Optimization, which introduces a game-theoretic framework for jointly optimizing client assignment and resource allocation; FedProtoKD, which proposes a dual knowledge distillation mechanism with adaptive class-wise prototype margins to improve performance in heterogeneous federated learning settings; and FedReFT, which fine-tunes client representations using sparse intervention layers and All-But-Me aggregation (sketched below).
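
FedReFT's All-But-Me aggregation is described here only by name, so the following is a minimal sketch of one plausible reading, assuming each client receives the average of every other client's update with its own contribution held out. This interpretation is inferred from the name alone and may differ from the paper's actual rule.

```python
import torch

def all_but_me_aggregate(updates: list[torch.Tensor]) -> list[torch.Tensor]:
    """Leave-one-out averaging: client i gets the mean over all j != i.

    ASSUMPTION: this reading of "All-But-Me" is inferred from the name;
    FedReFT's actual rule may weight clients differently or operate only
    on the sparse intervention-layer parameters.
    """
    n = len(updates)
    total = torch.stack(updates).sum(dim=0)
    # Leave-one-out mean: (sum of all updates - own update) / (n - 1).
    return [(total - updates[i]) / (n - 1) for i in range(n)]

# Toy example with three clients' parameter updates.
client_updates = [torch.randn(4) for _ in range(3)]
personalized = all_but_me_aggregate(client_updates)
```

A rule of this shape would keep each client's aggregate an external signal, which is one plausible motivation for excluding a client's own update during personalization.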

Sources

Pareto Actor-Critic for Communication and Computation Co-Optimization in Non-Cooperative Federated Learning Services

Closer to Reality: Practical Semi-Supervised Federated Learning for Foundation Model Adaptation

Degree of Staleness-Aware Data Updating in Federated Learning

MetaFed: Advancing Privacy, Performance, and Sustainability in Federated Metaverse Systems

Choice Outweighs Effort: Facilitating Complementary Knowledge Fusion in Federated Learning via Re-calibration and Merit-discrimination

Wait-free Replicated Data Types and Fair Reconciliation

FFT-MoE: Efficient Federated Fine-Tuning for Foundation Models via Large-scale Sparse MoE under Heterogeneous Edge

Federated Learning with Heterogeneous and Private Label Sets

FedProtoKD: Dual Knowledge Distillation with Adaptive Class-wise Prototype Margin for Heterogeneous Federated Learning

Towards Instance-wise Personalized Federated Learning via Semi-Implicit Bayesian Prompt Tuning

FedReFT: Federated Representation Fine-Tuning with All-But-Me Aggregation
