Efficient Decentralized Optimization and Federated Learning

The field of decentralized optimization and federated learning is moving toward three goals: improving communication efficiency, reducing computational overhead, and improving model accuracy. Researchers are exploring methods that exploit similarity among nodes, adapt to heterogeneous edge devices, and schedule training to minimize total training time. Noteworthy papers include:

  • A method that achieves state-of-the-art communication and computational complexity within the proximal decentralized optimization framework, both by refining the analysis of existing methods and by introducing a stabilized proximal algorithm.
  • A heterogeneity-aware split federated learning framework that adaptively controls batch sizes and model split points to balance communication and computation latency against training convergence across edge networks with diverse device capabilities.
  • An enhanced asynchronous AdaBoost framework for federated learning that incorporates adaptive communication scheduling and delayed weight compensation to reduce synchronization frequency and communication overhead.
  • A load-aware training scheduling mechanism that minimizes total training time in decentralized federated learning by accounting for both computational and communication loads.
  • A graph-based gossiping mechanism that optimizes network structure and scheduling for efficient communication across diverse network topologies and message capacities.

Together, these developments stand to significantly improve the efficiency, scalability, and robustness of decentralized optimization and federated learning systems. The sketches below illustrate, under simplifying assumptions, the kinds of mechanisms the papers above build on.
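
For readers new to the proximal framework behind the first paper, the standard proximal operator and the proximal gradient step that such methods build on are shown below; the paper's stabilization technique and refined analysis are not reproduced here.

```latex
\operatorname{prox}_{\gamma g}(x) \;=\; \arg\min_{y}\Big\{\, g(y) + \tfrac{1}{2\gamma}\,\lVert y - x\rVert^{2} \,\Big\},
\qquad
x^{k+1} \;=\; \operatorname{prox}_{\gamma g}\!\big(x^{k} - \gamma \nabla f(x^{k})\big),
```

where f is the smooth part and g the nonsmooth part of each node's objective; decentralized variants interleave steps of this form with communication between neighboring nodes.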
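
The batch-size and split-point control in the second paper can be pictured as minimizing an estimated per-round latency. A minimal sketch follows, assuming a linear cost model; all names, constants, and the grid search itself are illustrative assumptions, not details taken from HASFL.

```python
# Toy latency model for split federated learning: a client runs the first
# `split` layers on a batch, uploads the activations, and the server runs
# the remaining layers. The linear cost model is an illustrative assumption.
def round_latency(batch, split, flops_per_layer, act_bytes_per_layer,
                  client_speed, server_speed, bandwidth):
    client_compute = batch * sum(flops_per_layer[:split]) / client_speed
    server_compute = batch * sum(flops_per_layer[split:]) / server_speed
    upload = batch * act_bytes_per_layer[split - 1] / bandwidth
    return client_compute + upload + server_compute

def choose_config(flops_per_layer, act_bytes_per_layer, client_speed,
                  server_speed, bandwidth, batch_sizes=(8, 16, 32, 64)):
    """Grid-search the (batch size, split point) pair with the lowest
    estimated latency. A real framework would also weigh convergence
    behavior, which this toy search ignores."""
    candidates = [(b, s) for b in batch_sizes
                  for s in range(1, len(flops_per_layer))]
    return min(candidates,
               key=lambda bs: round_latency(bs[0], bs[1], flops_per_layer,
                                            act_bytes_per_layer, client_speed,
                                            server_speed, bandwidth))
```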
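
The delayed weight compensation and adaptive communication scheduling in the third paper can be illustrated generically. The staleness discount and norm threshold below are common heuristics assumed for illustration, not the paper's exact rules.

```python
import numpy as np

class AsyncAggregator:
    """Toy asynchronous aggregator. Updates computed against an old model
    version are applied with a staleness discount; the 1/(1 + staleness)
    rule is a widely used heuristic shown here for illustration only."""

    def __init__(self, model):
        self.model = np.asarray(model, dtype=float).copy()
        self.version = 0  # bumped whenever an update is applied

    def apply_update(self, delta, client_version):
        staleness = max(self.version - client_version, 0)
        weight = 1.0 / (1.0 + staleness)  # older updates count for less
        self.model += weight * np.asarray(delta, dtype=float)
        self.version += 1
        return self.model

def should_communicate(delta, threshold=1e-3):
    """Client-side check: skip a round-trip when the local change is small,
    one simple way to reduce synchronization frequency."""
    return np.linalg.norm(delta) > threshold
```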
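
Load-aware scheduling for model-circulation-based decentralized federated learning amounts to choosing a visiting order that accounts for both compute and link costs. A toy sketch follows, assuming per-node and per-link times are known; the exhaustive search is for illustration only.

```python
from itertools import permutations

def circulation_time(order, compute_time, link_time):
    """Total training time when a single model visits nodes in `order`:
    each node trains locally, then ships the model over the next link."""
    total = sum(compute_time[n] for n in order)
    total += sum(link_time[a][b] for a, b in zip(order, order[1:]))
    return total

def best_order(nodes, compute_time, link_time):
    """Exhaustive search over visiting orders; usable only for toy sizes.
    A practical scheduler would use a heuristic, but the objective
    (accounting for both computational and communication loads) is the same."""
    return min(permutations(nodes),
               key=lambda o: circulation_time(o, compute_time, link_time))

# Example with three nodes (times in seconds, purely illustrative):
# compute_time = {0: 3.0, 1: 1.0, 2: 2.0}
# link_time = {0: {1: 0.5, 2: 2.0}, 1: {0: 0.5, 2: 0.4}, 2: {0: 2.0, 1: 0.4}}
# best_order([0, 1, 2], compute_time, link_time)
```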
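
Finally, graph-based gossiping builds on the standard gossip-averaging primitive. A minimal sketch follows, assuming synchronous rounds and Metropolis-Hastings mixing weights; these are common defaults, not the paper's optimized structure or schedule.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic mixing matrix from a 0/1 adjacency matrix, using
    the standard Metropolis-Hastings rule."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def gossip_round(models, W):
    """One synchronous gossip round: each node's model becomes a weighted
    average of its neighbors' models (`models` is an n x d array)."""
    return W @ models
```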

Sources

Exploiting Similarity for Computation and Communication-Efficient Decentralized Optimization

HASFL: Heterogeneity-aware Split Federated Learning over Edge Computing Systems

Integrating Asynchronous AdaBoost into Federated Learning: Five Real World Applications

Load-Aware Training Scheduling for Model Circulation-based Decentralized Federated Learning

Federated Learning within Global Energy Budget over Heterogeneous Edge Accelerators

Graph-based Gossiping for Communication Efficiency in Decentralized Federated Learning
