The field of federated learning (FL) is moving toward more efficient and privacy-preserving methods. Researchers are exploring new techniques to reduce communication overhead, improve model accuracy, and strengthen the security of FL systems. Notably, the development of adaptive methods, such as those using gradient-difference-based error modeling and second-order optimization, is gaining attention for its potential to accelerate training and improve convergence rates. There is also growing interest in multimodal federated learning, which leverages multiple data modalities to improve downstream inference performance while preserving privacy.
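To make the gradient-difference idea concrete, here is a minimal sketch in plain NumPy. All names, data, and the damping rule are illustrative assumptions, not the update rule of any cited paper: the server rescales the averaged client gradient with a diagonal, secant-style curvature estimate built from successive gradient differences.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_gradient(w, X, y):
    # Local least-squares gradient, standing in for a client's local training step.
    return 2 * X.T @ (X @ w - y) / len(y)

def adaptive_server_step(w, grad, state, lr=0.5, eps=1e-8):
    # Rescale the averaged gradient by a diagonal, secant-style curvature
    # estimate built from successive gradient differences (illustrative only).
    if state:
        s = w - state["w"]                        # parameter change since last round
        d = grad - state["g"]                     # gradient difference since last round
        curvature = np.abs(d) / (np.abs(s) + eps)
        step = grad / np.maximum(curvature, 1.0)  # damp high-curvature coordinates
    else:
        step = grad                               # plain gradient step in round 0
    state.update(w=w.copy(), g=grad.copy())
    return w - lr * step

# Toy federation: 4 clients sharing a 3-parameter linear model.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w, state = np.zeros(3), {}
for _ in range(20):
    avg_grad = np.mean([client_gradient(w, X, y) for X, y in clients], axis=0)
    w = adaptive_server_step(w, avg_grad, state)
print(w)
```

The appeal of such updates in FL is that the curvature estimate is computed entirely at the server from quantities it already has, so no extra information needs to be communicated by clients.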
Beyond these advances, researchers are focusing on cost-aware scheduling and serverless workflows to optimize resource utilization and reduce expenses in FL environments. Novel algorithms and frameworks, including those designed for joint-cloud FaaS systems and federated split learning, are being explored to address vendor lock-in, communication overhead, and client heterogeneity.
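As a rough illustration of what cost-aware scheduling can look like, the toy sketch below picks the cheapest instance type whose expected runtime, padded for spot preemptions, still meets a synchronous-round deadline. The instance names, prices, and the padding model are made up for the example and do not reflect FedCostAware's actual policy.

```python
from dataclasses import dataclass

@dataclass
class InstanceOption:
    name: str
    hourly_cost: float        # USD per hour (made-up numbers)
    interruption_rate: float  # expected spot preemptions per hour

def effective_hours(opt: InstanceOption, round_hours: float, restart_overhead: float = 0.2) -> float:
    # Pad the nominal runtime by 20% of a round per expected preemption (toy model).
    return round_hours * (1 + restart_overhead * opt.interruption_rate * round_hours)

def expected_cost(opt: InstanceOption, round_hours: float) -> float:
    return opt.hourly_cost * effective_hours(opt, round_hours)

def pick_instance(options, round_hours, deadline_hours):
    # Cheapest option whose padded runtime still meets the synchronous-round deadline;
    # fall back to the overall cheapest if nothing is feasible.
    feasible = [o for o in options if effective_hours(o, round_hours) <= deadline_hours]
    return min(feasible or options, key=lambda o: expected_cost(o, round_hours))

options = [
    InstanceOption("spot-gpu-a", 0.35, interruption_rate=0.6),
    InstanceOption("spot-gpu-b", 0.90, interruption_rate=0.2),
    InstanceOption("on-demand-gpu", 1.20, interruption_rate=0.0),
]
print(pick_instance(options, round_hours=1.5, deadline_hours=2.0).name)  # -> spot-gpu-a
```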
Some noteworthy papers in this area include the following. FedCostAware introduces a cost-aware scheduling algorithm that optimizes synchronous FL on cloud spot instances, reducing cloud computing costs. Jointλ is a distributed runtime system that orchestrates serverless workflows across multiple FaaS systems without relying on a centralized orchestrator, achieving significant reductions in latency and cost. The Panaceas for Improving Low-Rank Decomposition proposes techniques that enhance low-rank decomposition methods in communication-efficient FL, achieving faster convergence and superior accuracy. FSL-SAGE is a federated split learning algorithm that estimates server-side gradient feedback via auxiliary models, reducing communication costs and client memory requirements while achieving state-of-the-art convergence rates.
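To show why low-rank decomposition cuts communication in FL, here is a generic truncated-SVD sketch, not the specific construction proposed in the Panaceas paper: a client factorizes its weight-matrix update into two thin matrices and transmits those instead of the dense update.

```python
import numpy as np

def compress_update(delta: np.ndarray, rank: int):
    # Truncated SVD of a weight-matrix update: transmit two thin factors
    # instead of the full matrix.
    U, s, Vt = np.linalg.svd(delta, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]       # shapes (m, r) and (r, n)

def decompress_update(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    return A @ B

rng = np.random.default_rng(1)
# A client's update that is close to low-rank, as model deltas often are.
delta = rng.normal(size=(512, 16)) @ rng.normal(size=(16, 256)) \
        + 0.01 * rng.normal(size=(512, 256))
A, B = compress_update(delta, rank=16)
sent = A.size + B.size                                # floats actually transmitted
err = np.linalg.norm(delta - decompress_update(A, B)) / np.linalg.norm(delta)
print(f"compression: {delta.size / sent:.1f}x, relative error: {err:.4f}")
```

For an m-by-n layer and rank r, the payload shrinks from m*n to r*(m+n) values, which is where the communication savings come from when r is small relative to the layer dimensions.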