Distributed Optimization and Machine Learning Advances

The field of distributed optimization and machine learning is seeing significant developments focused on improving convergence rates and handling non-convexity in distributed algorithms. Researchers are exploring methods to improve federated learning, including the use of reference models and Bayesian fine-tuning. The theoretical understanding of local update algorithms is also advancing, with work characterizing the roles of data heterogeneity and smoothness. In addition, robust algorithms for non-IID machine learning problems and GPU-based complete search methods for bound-constrained nonlinear minimization have been proposed. Notable papers include FedRef, which proposes a communication-efficient Bayesian fine-tuning method that uses a reference model to mitigate catastrophic forgetting, and A Robust Algorithm for Non-IID Machine Learning Problems with Convergence Analysis, which provides a rigorous convergence proof for the proposed algorithm under mild assumptions.
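
As a rough illustration of the reference-model idea in federated fine-tuning, the sketch below runs FedAvg-style local updates in which each client's least-squares loss is augmented with a proximal penalty pulling the local model toward a fixed reference model. This is a generic construction under stated assumptions, not the FedRef algorithm itself; the names and hyperparameters (ref_strength, step counts, the synthetic non-IID clients) are illustrative.

```python
# Minimal sketch: federated local updates with a proximal penalty toward a
# reference model, one common way to limit catastrophic forgetting / client drift.
# This is an illustrative assumption, not the method from the cited paper.
import numpy as np

def local_update(w_global, w_ref, X, y, lr=0.1, ref_strength=0.5, steps=10):
    """One client's local SGD on a least-squares loss plus a proximal term
    0.5 * ref_strength * ||w - w_ref||^2 pulling toward the reference model."""
    w = w_global.copy()
    for _ in range(steps):
        grad_data = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        grad_ref = ref_strength * (w - w_ref)     # gradient of the proximal penalty
        w -= lr * (grad_data + grad_ref)
    return w

def federated_round(w_global, w_ref, clients):
    """FedAvg-style aggregation: average the locally updated models."""
    updates = [local_update(w_global, w_ref, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    w_true = rng.normal(size=d)
    # Non-IID clients: each sees a differently scaled feature distribution.
    clients = []
    for scale in (0.5, 1.0, 2.0):
        X = rng.normal(scale=scale, size=(50, d))
        y = X @ w_true + 0.01 * rng.normal(size=50)
        clients.append((X, y))
    w_ref = np.zeros(d)   # stand-in for a pretrained reference model
    w = np.zeros(d)
    for _ in range(20):
        w = federated_round(w, w_ref, clients)
    # The proximal term biases the solution toward w_ref, trading fit for stability.
    print("distance to w_true:", np.linalg.norm(w - w_true))
```

The proximal coefficient controls the usual trade-off: a larger ref_strength keeps clients closer to the reference model (less drift and forgetting) at the cost of fitting the local data less tightly.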

Sources

Momentum-based Accelerated Algorithm for Distributed Optimization under Sector-Bound Nonlinearity

FedRef: Communication-Efficient Bayesian Fine Tuning with Reference Model

What Makes Local Updates Effective: The Role of Data Heterogeneity and Smoothness

A Robust Algorithm for Non-IID Machine Learning Problems with Convergence Analysis

GPU-based complete search for nonlinear minimization subject to bounds

In-Training Multicalibrated Survival Analysis for Healthcare via Constrained Optimization
