Advances in Stochastic Optimization and Differential Equations

The field of stochastic optimization and differential equations is advancing on two fronts: more accurate numerical schemes for stochastic differential equations (SDEs) and sharper convergence theory for stochastic optimizers. One notable trend is the development of stochastic Runge-Kutta methods that converge with order 1 in the $L^p$-norm, improving on the order-$1/2$ strong convergence of the classical Euler-Maruyama scheme. Another active direction is the analysis of momentum-based algorithms, including quasi-hyperbolic momentum, SGD with adaptive preconditioning, and Muon, with particular attention to how batch size affects convergence, from increasing batch-size schedules to critical batch sizes. These advances bear on deep learning and scientific computing alike. Noteworthy papers:

SGD with Adaptive Preconditioning: Unified Analysis and Momentum Acceleration, which provides a unified convergence analysis of SGD with adaptive preconditioning and establishes connections between several popular algorithms.

A Class of Stochastic Runge-Kutta Methods for Stochastic Differential Equations, which develops a new class of efficient stochastic Runge-Kutta methods that converge with order 1 in the $L^p$-norm.

Illustrative sketches of these techniques appear below.
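
To make the order-1 idea concrete, here is a minimal sketch of a classical derivative-free stochastic Runge-Kutta step (Platen's explicit strong order-1.0 scheme) for a scalar SDE $dX = a(X)\,dt + b(X)\,dW$. It only illustrates the general structure of such methods; it is not the new class of schemes from the cited paper, and the function and variable names are illustrative.

```python
import numpy as np

def platen_step(a, b, x, h, dW):
    """One step of Platen's derivative-free strong order-1.0 scheme."""
    sqrt_h = np.sqrt(h)
    ax, bx = a(x), b(x)
    x_sup = x + ax * h + bx * sqrt_h          # supporting value
    # Derivative-free replacement for the Milstein correction term.
    corr = (b(x_sup) - bx) * (dW**2 - h) / (2.0 * sqrt_h)
    return x + ax * h + bx * dW + corr

def integrate(a, b, x0, T, n_steps, rng):
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        x = platen_step(a, b, x, h, rng.normal(0.0, np.sqrt(h)))
    return x

# Example: geometric Brownian motion dX = mu*X dt + sigma*X dW.
rng = np.random.default_rng(0)
mu, sigma = 0.05, 0.2
print(integrate(lambda x: mu * x, lambda x: sigma * x,
                x0=1.0, T=1.0, n_steps=1_000, rng=rng))
```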
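
The quasi-hyperbolic momentum (QHM) update averages a plain stochastic gradient with a momentum buffer, recovering SGD at $\nu = 0$ and momentum SGD at $\nu = 1$. The sketch below pairs it with a geometrically increasing batch size, in the spirit of the cited analysis; the doubling schedule, toy objective, and hyperparameters are illustrative assumptions, not the paper's.

```python
import numpy as np

def qhm_increasing_batch(grad_fn, theta0, lr=0.1, beta=0.9, nu=0.7,
                         batch0=8, growth=2, n_rounds=10, rng=None):
    """QHM with an increasing batch size.
    Update: buf <- beta*buf + (1-beta)*g;
            theta <- theta - lr*(nu*buf + (1-nu)*g)."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    buf = np.zeros_like(theta)
    batch = batch0
    for _ in range(n_rounds):
        g = grad_fn(theta, batch, rng)            # minibatch gradient estimate
        buf = beta * buf + (1.0 - beta) * g       # momentum buffer
        theta = theta - lr * (nu * buf + (1.0 - nu) * g)
        batch *= growth                           # larger batch -> less gradient noise
    return theta

# Toy objective f(theta) = 0.5*||theta||^2 with batch-dependent gradient noise.
def noisy_grad(theta, batch, rng):
    return theta + rng.normal(0.0, 1.0 / np.sqrt(batch), size=theta.shape)

print(qhm_increasing_batch(noisy_grad, theta0=np.ones(3)))
```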
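
Unified analyses of adaptively preconditioned SGD view methods such as AdaGrad and RMSProp as instances of $\theta_{t+1} = \theta_t - \alpha P_t^{-1} g_t$ for different choices of the preconditioner $P_t$. Below is a minimal sketch with an RMSProp-style diagonal preconditioner; this concrete choice, and the helper name, is an assumption for illustration rather than the paper's general framework.

```python
import numpy as np

def preconditioned_sgd_step(theta, grad, state, lr=1e-2, beta2=0.999, eps=1e-8):
    """theta <- theta - lr * P^{-1} grad, with diagonal P built from a
    running second-moment estimate (an RMSProp-style preconditioner)."""
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * grad**2
    precond = np.sqrt(state["v"]) + eps      # diagonal of P
    return theta - lr * grad / precond, state

theta = np.ones(3)
state = {"v": np.zeros(3)}
rng = np.random.default_rng(0)
for _ in range(100):
    g = theta + rng.normal(0.0, 0.1, size=3)  # noisy gradient of 0.5*||theta||^2
    theta, state = preconditioned_sgd_step(theta, g, state)
print(theta)
```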
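
The cited Muon analysis concerns an optimizer whose update approximately orthogonalizes the momentum matrix. A common way to do this is the quintic Newton-Schulz iteration sketched below; the coefficients follow the widely circulated reference implementation, and using them here is an assumption on my part, not something established by the paper.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5):
    """Approximately orthogonalize a matrix with the quintic
    Newton-Schulz iteration used by Muon-style optimizers."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)   # scale so singular values are <= 1
    if G.shape[0] > G.shape[1]:
        X = X.T                          # iterate on the wide orientation
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    if G.shape[0] > G.shape[1]:
        X = X.T
    return X

G = np.random.default_rng(0).normal(size=(4, 6))
O = newton_schulz_orthogonalize(G)
print(np.round(O @ O.T, 2))   # approximately the identity
```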

Sources

A Class of Stochastic Runge-Kutta Methods for Stochastic Differential Equations Converging with Order 1 in $L^p$-Norm

Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size

SGD with Adaptive Preconditioning: Unified Analysis and Momentum Acceleration

Analysis of Muon's Convergence and Critical Batch Size
