The field of federated learning is increasingly focused on three challenges: data heterogeneity across clients, heterogeneity in client model architectures, and scalability. Researchers are exploring novel architectures and algorithms to improve the efficiency and accuracy of federated models; notable trends include mixture-of-experts layers, hypernetworks, and dual prototype learning to enhance performance and generalization (a minimal sketch of the mixture-of-experts idea follows).
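To make the mixture-of-experts trend concrete, here is a minimal sketch of an MoE layer with top-1 gating. The class name, dimensions, and routing details are illustrative assumptions, not drawn from any of the papers discussed here.

```python
# Minimal sketch of a mixture-of-experts layer with top-1 gating.
# All names and dimensions are illustrative, not taken from any specific paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopOneMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 64):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # router: scores each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        probs = F.softmax(self.gate(x), dim=-1)           # (batch, num_experts)
        top_p, top_i = probs.max(dim=-1)                  # top-1 expert per sample
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e                             # samples routed to expert e
            if mask.any():
                # Scale by the gate probability so routing stays differentiable.
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(8, 32)
layer = TopOneMoE(dim=32)
print(layer(x).shape)  # torch.Size([8, 32])
```

The appeal for federated settings is that experts can specialize on different client data distributions while sharing a common router.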
Particularly noteworthy papers include:
- Multi-Task Dense Prediction Fine-Tuning with Mixture of Fine-Grained Experts, which introduces a fine-grained mixture-of-experts architecture for multi-task dense prediction.
- FedSWA: Improving Generalization in Federated Learning with Highly Heterogeneous Data via Momentum-Based Stochastic Controlled Weight Averaging, which uses momentum-based stochastic controlled weight averaging to improve generalization under highly heterogeneous data (a loose sketch of the general idea follows this list).
- DAG-AFL: Directed Acyclic Graph-based Asynchronous Federated Learning, which presents a decentralized, scalable framework for asynchronous federated learning built on directed acyclic graphs.
- H2Tune: Federated Foundation Model Fine-Tuning with Hybrid Heterogeneity, which addresses the challenges of hybrid heterogeneity in federated fine-tuning of foundation models.
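As a loose illustration of the direction FedSWA takes, the sketch below combines a plain FedAvg aggregation step with SWA-style running averaging of the per-round global models. This is a generic recipe under my own assumptions, not the paper's controlled, momentum-based algorithm; all function names and the toy client loop are hypothetical.

```python
# Loose sketch: FedAvg aggregation plus SWA-style averaging of round models.
# NOT the FedSWA algorithm itself; its controlled-momentum details differ.
import copy
import torch
import torch.nn as nn

def fedavg(client_states, weights):
    """Weighted average of client state_dicts (a plain FedAvg step)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(w * s[key] for s, w in zip(client_states, weights))
    return avg

def swa_update(swa_state, round_state, n_averaged):
    """Fold the current round's global model into a running SWA average."""
    for key in swa_state:
        swa_state[key] += (round_state[key] - swa_state[key]) / (n_averaged + 1)

global_model = nn.Linear(4, 2)
swa_state = copy.deepcopy(global_model.state_dict())
for rnd in range(5):
    # Stand-in for local training: three clients perturb the global model.
    client_states = []
    for _ in range(3):
        local = copy.deepcopy(global_model)
        with torch.no_grad():
            for p in local.parameters():
                p.add_(0.01 * torch.randn_like(p))
        client_states.append(local.state_dict())
    round_state = fedavg(client_states, weights=[1 / 3] * 3)
    global_model.load_state_dict(round_state)
    swa_update(swa_state, round_state, n_averaged=rnd)
print({k: v.shape for k, v in swa_state.items()})
```

The intuition, shared with centralized SWA, is that averaging the trajectory of round models lands in a flatter region of the loss surface, which is what helps generalization when client data is highly heterogeneous.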