The field of stochastic optimization is moving towards a deeper understanding of the global dynamics of stochastic gradient descent (SGD) and its variants, with a focus on escaping sharp local minima and achieving better generalization. Recent work has also explored the convergence of stochastic gradient Langevin dynamics (SGLD; its canonical update is sketched after the list below) and the development of high-order numerical schemes for stochastic partial differential equations (SPDEs). There is also growing interest in nonparametric estimation of invariant measures from multiscale data. Notable papers include:
- Global Dynamics of Heavy-Tailed SGDs in Nonconvex Loss Landscape, which shows that injecting heavy-tailed noise during training and truncating the resulting updates can improve generalization performance (a minimal sketch of this mechanism follows the list).
- Higher order numerical schemes for SPDEs with additive noise, which presents high-order numerical schemes for linear stochastic heat and wave equations with additive noise and Dirichlet boundary conditions (a standard baseline scheme for the heat case is sketched below).
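
To make the truncation mechanism concrete, here is a minimal sketch of heavy-tailed SGD under one plausible reading: Pareto-tailed noise is injected into each stochastic gradient, and the resulting update is clipped at a fixed norm threshold. The Pareto noise model, the threshold `b`, and the double-well toy loss are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def heavy_tailed_noise(shape, alpha=1.5):
    """Symmetric Pareto-tailed noise: an illustrative stand-in for a
    heavy-tailed (e.g. alpha-stable) injection; smaller alpha, heavier tails."""
    magnitude = rng.pareto(alpha, size=shape)
    sign = rng.choice([-1.0, 1.0], size=shape)
    return sign * magnitude

def truncated_heavy_tailed_sgd(grad_fn, theta, lr=1e-2, b=1.0, steps=2000):
    """SGD with injected heavy-tailed noise; each update is truncated
    (norm-clipped) at b, so the rare enormous jumps are capped."""
    for _ in range(steps):
        g = grad_fn(theta) + heavy_tailed_noise(theta.shape)
        update = lr * g
        norm = np.linalg.norm(update)
        if norm > b:  # truncation of the heavy-tailed update
            update *= b / norm
        theta = theta - update
    return theta

# Toy double-well loss (x^2 - 1)^2 with minima at +/-1.
grad = lambda x: 4.0 * x**3 - 4.0 * x
print(truncated_heavy_tailed_sgd(grad, np.array([2.0])))
```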
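
For reference on what such schemes compute, the sketch below discretizes the 1-D stochastic heat equation du = u_xx dt + dW on (0, 1) with homogeneous Dirichlet boundary conditions, combining a spectral Galerkin truncation in the sine basis with an exponential Euler time step. This is a standard baseline for illustration, not the paper's higher-order construction; the truncation level N and step size dt are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_euler_stochastic_heat(N=64, dt=1e-3, steps=1000):
    """Spectral Galerkin + exponential Euler for du = u_xx dt + dW
    on (0, 1) with homogeneous Dirichlet boundary conditions.

    Modes e_k(x) = sqrt(2) sin(k*pi*x) have eigenvalues lam_k = (k*pi)^2;
    each Fourier mode is an OU process, which the exponential Euler
    step integrates exactly in law over one time step."""
    k = np.arange(1, N + 1)
    lam = (k * np.pi) ** 2
    decay = np.exp(-lam * dt)
    # Exact standard deviation of the stochastic convolution per step.
    noise_std = np.sqrt((1.0 - np.exp(-2.0 * lam * dt)) / (2.0 * lam))
    u = np.zeros(N)  # mode coefficients, zero initial condition
    for _ in range(steps):
        u = decay * u + noise_std * rng.standard_normal(N)
    return k, u

def evaluate(k, u, x):
    """Reconstruct u(x) from the sine-mode coefficients."""
    return (u[:, None] * np.sqrt(2) * np.sin(np.outer(k, np.pi * x))).sum(axis=0)

k, u = exp_euler_stochastic_heat()
print(evaluate(k, u, np.linspace(0, 1, 5)))
```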
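
Finally, for the Langevin thread mentioned above, the canonical SGLD update is a gradient step plus Gaussian noise whose variance is tied to the step size. The quadratic potential and fixed step size in this sketch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sgld(grad_fn, theta, lr=1e-3, steps=5000):
    """Stochastic gradient Langevin dynamics: a gradient step plus
    Gaussian noise of variance 2*lr, i.e. the Euler-Maruyama
    discretization of dtheta = -grad U(theta) dt + sqrt(2) dW."""
    samples = []
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta) \
            + np.sqrt(2.0 * lr) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)

# Sample from a standard Gaussian target, U(theta) = theta^2 / 2.
samples = sgld(lambda t: t, np.array([3.0]))
print(samples.mean(), samples.std())  # should approach 0 and 1
```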