Federated Learning Developments

Current work in federated learning centers on handling statistical heterogeneity across clients while improving the performance of the global model. Researchers are pursuing several directions, including personalized federated learning, adaptive latent-space constraints, and multi-layer hierarchical federated learning, to improve model adaptability and training efficiency in heterogeneous environments. Noteworthy papers in this area include:

  • FedADP, a unified model aggregation framework for federated learning with heterogeneous model architectures, which reports accuracy improvements of up to 23.30% over existing methods.
  • KARULA, a regularized strategy for personalized federated learning that constrains pairwise model dissimilarities between clients according to the difference in their data distributions, with demonstrated effectiveness on synthetic and real federated datasets; a toy sketch of this kind of pairwise regularization follows this list.
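To make the pairwise-regularization idea concrete, the sketch below is a minimal, assumption-laden toy: each client keeps its own model, local losses are minimized jointly, and a proximity penalty keeps clients with similar data from drifting apart. The synthetic data, squared-error loss, squared Euclidean dissimilarity, and hand-picked similarity weights `sim` are all illustrative assumptions, not the actual KARULA algorithm.

```python
import numpy as np

# Toy personalized FL objective:
#   sum_k local_mse_k(w[k]) + lam * sum_{i<j} sim[i, j] * ||w[i] - w[j]||^2
# The dissimilarity measure, similarity weights, and optimizer are
# illustrative assumptions, not the method from the KARULA paper.

rng = np.random.default_rng(0)
n_clients, n_features, n_samples = 4, 5, 50

# Synthetic heterogeneous clients: true weights drift slightly per client.
base = rng.normal(size=n_features)
X = [rng.normal(size=(n_samples, n_features)) for _ in range(n_clients)]
y = [X[k] @ (base + 0.3 * rng.normal(size=n_features)) for k in range(n_clients)]

# Hypothetical pairwise similarity weights; in a real system these would be
# derived from comparing client data distributions.
sim = np.ones((n_clients, n_clients)) - np.eye(n_clients)

w = [np.zeros(n_features) for _ in range(n_clients)]
lam, lr = 0.1, 0.01

for step in range(200):
    for k in range(n_clients):
        # Gradient of the local squared-error loss.
        grad = 2.0 * X[k].T @ (X[k] @ w[k] - y[k]) / n_samples
        # Gradient of the pairwise proximity penalty.
        for j in range(n_clients):
            if j != k:
                grad += 2.0 * lam * sim[k, j] * (w[k] - w[j])
        w[k] -= lr * grad

for k in range(n_clients):
    mse = np.mean((X[k] @ w[k] - y[k]) ** 2)
    print(f"client {k}: local MSE = {mse:.4f}")
```

The regularization weight `lam` controls the trade-off between personalization (each client fits its own data) and collaboration (models of similar clients are pulled together).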

Sources

Interaction-Aware Parameter Privacy-Preserving Data Sharing in Coupled Systems via Particle Filter Reinforcement Learning

FedADP: Unified Model Aggregation for Federated Learning with Heterogeneous Model Architectures

Adaptive Latent-Space Constraints in Personalized FL

Personalized Federated Learning under Model Dissimilarity Constraints

Multi-Layer Hierarchical Federated Learning with Quantization

A federated Kaczmarz algorithm

Approximated Behavioral Metric-based State Projection for Federated Reinforcement Learning

Enhancing the Performance of Global Model by Improving the Adaptability of Local Models in Federated Learning
