Advances in Federated Learning and Distributed Optimization

The field of federated learning and distributed optimization is advancing along three main fronts: improving communication efficiency, mitigating the impact of data heterogeneity, and strengthening the robustness of decentralized learning methods. Researchers are exploring techniques such as multilevel Monte Carlo estimators for debiasing compressed updates, asynchronous federated learning mechanisms, and hierarchical reinforcement learning frameworks to address the challenges of distributed learning. Other notable advances include communication-efficient module-wise federated learning frameworks, programmable data plane acceleration for asynchronous distributed reinforcement learning, and methods for estimating data influence cascades in decentralized environments, all of which stand to improve the performance and scalability of distributed learning systems across a wide range of domains.

Noteworthy papers include Beyond Communication Overhead, which introduces a multilevel Monte Carlo compression scheme that mitigates the bias introduced by gradient compression; Air-FedGA, which proposes a grouping asynchronous federated learning mechanism that exploits over-the-air computation (AirComp); and Ampere, a split federated learning system that minimizes on-device computation and device-server communication while improving model accuracy.
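
To make the multilevel Monte Carlo idea concrete, the sketch below shows a generic randomized multilevel estimator built from a hierarchy of top-k compressors: coarse levels are cheap to transmit, the finest level is lossless, and randomly sampling one telescoping correction term keeps the estimator unbiased in expectation. This is a minimal illustration of the general MLMC debiasing pattern, not the exact scheme from the paper; the top-k hierarchy, level probabilities, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest (a biased compressor)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def mlmc_compress(x, ks, probs):
    """Randomized multilevel estimator over a compressor hierarchy (illustrative).

    Levels l = 0..L apply top-k with increasing k; the finest level uses
    k = len(x) and is exact. Sampling correction level l+1 with probability
    probs[l] and returning C_0(x) + (C_{l+1}(x) - C_l(x)) / probs[l]
    telescopes in expectation to C_L(x) = x, so the estimator is unbiased
    even though each round usually transmits only coarse levels.
    """
    levels = [top_k(x, k) for k in ks]
    l = rng.choice(len(probs), p=probs)  # sample which correction term to send
    return levels[0] + (levels[l + 1] - levels[l]) / probs[l]

# Usage: averaging many independent estimates recovers x, confirming unbiasedness.
x = rng.standard_normal(1000)
ks = [10, 100, 1000]   # coarse -> exact hierarchy of sparsification levels
probs = [0.5, 0.5]     # sampling probabilities for the two correction levels
est = np.mean([mlmc_compress(x, ks, probs) for _ in range(5000)], axis=0)
print(np.linalg.norm(est - x))  # small residual: bias has been removed
```

In a distributed setting, each worker would send only the sampled levels rather than the full vector, trading a controlled increase in variance for unbiased, communication-efficient updates.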

Sources

Beyond Communication Overhead: A Multilevel Monte Carlo Approach for Mitigating Compression Bias in Distributed Learning

Air-FedGA: A Grouping Asynchronous Federated Learning Mechanism Exploiting Over-the-air Computation

Communication-Efficient Module-Wise Federated Learning for Grasp Pose Detection in Cluttered Environments

OLAF: Programmable Data Plane Acceleration for Asynchronous Distributed Reinforcement Learning

FineGrasp: Towards Robust Grasping for Delicate Objects

Efficient Federated Learning with Timely Update Dissemination

Failure Forecasting Boosts Robustness of Sim2Real Rhythmic Insertion Policies

A Single Merging Suffices: Recovering Server-based Learning Performance in Decentralized Learning

Hierarchical Reinforcement Learning for Articulated Tool Manipulation with Multifingered Hand

DICE: Data Influence Cascade in Decentralized Learning

Distributed Training under Packet Loss

Ampere: Communication-Efficient and High-Accuracy Split Federated Learning

Improving the Price of Anarchy via Predictions in Parallel-Link Networks
