The field of decentralized optimization and federated learning is moving toward improved communication efficiency, reduced computational overhead, and higher model accuracy. Researchers are exploring methods that exploit similarity among nodes, adapt to heterogeneous edge devices, and optimize training schedules to minimize total training time. Noteworthy papers include:
- A stabilized proximal decentralized optimization method that achieves state-of-the-art communication and computational complexities by refining the analysis of existing methods within the proximal decentralized optimization framework (a generic proximal-gradient step is sketched after this list).
- A heterogeneity-aware split federated learning framework that adaptively controls batch sizes and model split points to balance communication and computation latency against training convergence in heterogeneous edge networks (see the batch-sizing sketch below).
- An enhanced asynchronous AdaBoost framework for federated learning that incorporates adaptive communication scheduling and delayed weight compensation to reduce synchronization frequency and communication overhead (see the staleness-discounting sketch below).
- A load-aware training scheduling mechanism that minimizes total training time in decentralized federated learning by accounting for both computational and communication loads (see the scheduling sketch below).
- A graph-based gossiping mechanism that optimizes network structure and scheduling for efficient communication across various network topologies and message capacities (see the gossip sketch below).

These developments have the potential to significantly improve the efficiency, scalability, and robustness of decentralized optimization and federated learning systems.
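To make the proximal decentralized setting concrete, here is a minimal sketch of a generic decentralized proximal-gradient step: gossip mixing with neighbors, a local gradient step, then a proximal operator. It assumes a doubly stochastic mixing matrix `W` and an L1 regularizer as the proximal term; it illustrates the general framework, not the stabilized method from the paper above.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_decentralized_step(X, W, grads, step, reg):
    """One generic decentralized proximal-gradient step.

    X     : (n_nodes, dim) array of local iterates, one row per node
    W     : (n_nodes, n_nodes) doubly stochastic mixing (gossip) matrix
    grads : (n_nodes, dim) local gradients evaluated at X
    """
    mixed = W @ X                                    # average with neighbors
    return soft_threshold(mixed - step * grads, step * reg)
```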
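For the heterogeneity-aware split federated learning item, a minimal sketch of one plausible ingredient: choosing per-device batch sizes so that each device's compute-plus-upload time fits a common round budget. The profiling inputs and the helper name are assumptions for illustration, not the framework's actual controller.

```python
def balanced_batch_sizes(per_sample_time, upload_bytes, bandwidth,
                         round_budget_s, max_batch=256):
    """Pick per-device batch sizes so compute + upload time fits a round budget.

    per_sample_time : seconds per training sample on each device (profiled)
    upload_bytes    : bytes each device sends per round (activations/gradients)
    bandwidth       : uplink bytes/second for each device
    """
    sizes = []
    for t, bw in zip(per_sample_time, bandwidth):
        comm_s = upload_bytes / bw                       # upload latency this round
        compute_budget = max(round_budget_s - comm_s, 0.0)
        sizes.append(max(1, min(max_batch, int(compute_budget / t))))
    return sizes
```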
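For the asynchronous AdaBoost item, a minimal sketch of staleness-discounted merging of a delayed client update: the older the update, the smaller its weight in the merge. This is a generic discounting rule used as a stand-in; the paper's delayed weight compensation scheme may differ.

```python
def merge_delayed_update(global_w, client_w, staleness, base_alpha=0.5, decay=0.5):
    """Merge a delayed client model into the global model.

    staleness : number of rounds the client update lags behind the global model
    """
    alpha = base_alpha / (1.0 + decay * staleness)   # older updates count less
    return [(1.0 - alpha) * g + alpha * c for g, c in zip(global_w, client_w)]
```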
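For the load-aware scheduling item, a minimal sketch under the assumption that each node's round time is its compute time plus its communication time: pick the k nodes with the smallest estimated load, so the round finishes as soon as the slowest selected node does. The node dictionary keys are illustrative.

```python
def schedule_round(nodes, k):
    """Select k participants with the smallest estimated compute + communication time.

    nodes : list of dicts with (assumed) keys "id", "compute_s", "comm_s"
    """
    ranked = sorted(nodes, key=lambda n: n["compute_s"] + n["comm_s"])
    chosen = ranked[:k]
    round_time = max(n["compute_s"] + n["comm_s"] for n in chosen)  # straggler bound
    return chosen, round_time
```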
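Finally, for the graph-based gossiping item, a minimal sketch of one synchronous gossip round on an arbitrary topology: each node replaces its value with the average of its own value and its neighbors' values. The topology optimization and message-capacity constraints studied in the paper are not modeled here.

```python
def gossip_round(values, neighbors):
    """One synchronous gossip round over a graph.

    values    : list of node values (floats or vectors), indexed by node id
    neighbors : dict mapping node id -> list of neighbor ids
    """
    updated = list(values)
    for node, nbrs in neighbors.items():
        group = [values[node]] + [values[j] for j in nbrs]
        updated[node] = sum(group) / len(group)
    return updated

# Example: a ring of four nodes converging toward the global average.
vals = [1.0, 2.0, 3.0, 4.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for _ in range(10):
    vals = gossip_round(vals, ring)
```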