The field of federated learning is increasingly focused on addressing statistical heterogeneity among clients and improving the performance of global models. Researchers are exploring approaches such as personalized federated learning, adaptive latent-space constraints, and multi-layer hierarchical federated learning to enhance model adaptability and training efficiency in heterogeneous environments. Noteworthy papers in this area include:
- FedADP, which proposes a unified model aggregation framework for federated learning with heterogeneous model architectures, achieving an accuracy improvement of up to 23.30% compared to existing methods.
- KARULA, a regularization strategy for personalized federated learning that constrains pairwise dissimilarities between client models according to the differences in their data distributions, with demonstrated effectiveness on synthetic and real federated datasets.
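
To make the pairwise-constraint idea behind KARULA concrete, the sketch below shows one possible form of such a regularizer: client models whose data distributions are similar are pulled closer together, weighted by a distribution-similarity term. The function names, the total-variation distance, and the exponential similarity weight are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def distribution_distance(p, q):
    # Total variation distance between two label distributions
    # (an illustrative choice of distribution dissimilarity).
    return 0.5 * np.abs(p - q).sum()

def pairwise_regularizer(client_params, client_label_dists, strength=0.1):
    """Illustrative pairwise dissimilarity penalty.

    client_params: list of flat parameter vectors, one per client.
    client_label_dists: list of label distributions, one per client.
    Pairs of clients with similar data distributions receive a larger
    weight, so their personalized models are constrained to stay close.
    """
    n = len(client_params)
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = distribution_distance(client_label_dists[i],
                                         client_label_dists[j])
            weight = np.exp(-dist)  # assumed similarity kernel
            diff = client_params[i] - client_params[j]
            penalty += weight * np.dot(diff, diff)
    return strength * penalty

# Toy usage: 3 clients, 5-dimensional models, 4 classes.
rng = np.random.default_rng(0)
params = [rng.normal(size=5) for _ in range(3)]
dists = [np.array([0.7, 0.1, 0.1, 0.1]),
         np.array([0.6, 0.2, 0.1, 0.1]),
         np.array([0.1, 0.1, 0.1, 0.7])]
print(pairwise_regularizer(params, dists))
```

In a personalized federated setup, a term of this kind would typically be added to each client's local training objective so that personalization is balanced against agreement with distributionally similar peers.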