Advances in Online Learning and Optimization

The field of online learning and optimization is evolving rapidly, with a focus on methods that adapt to complex, dynamic environments. Recent research emphasizes algorithms that can handle non-stationary data, strategic interactions, and high-dimensional spaces. One notable direction is universal online learning, which seeks optimal regret guarantees without prior knowledge of the curvature of the online functions. Another is the design of preconditioners for stochastic gradient descent, which can substantially improve convergence rate and stability. Researchers are also exploring adaptive optimizers and non-Euclidean descent methods to improve the efficiency and effectiveness of optimization algorithms.

Noteworthy papers include: "Designing Preconditioners for SGD", which introduces a framework for analyzing and designing preconditioners for stochastic gradient descent; "Adaptivity and Universality", which presents a universal online learning approach achieving both universality and adaptivity; and "Strategy-robust Online Learning in Contextual Pricing", which introduces a strategy-robust notion of regret together with a polynomial-time approximation scheme for learning linear pricing policies in adversarial environments.
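
To make the preconditioning direction concrete, here is a minimal sketch of why a preconditioner helps on an ill-conditioned problem. Everything in it is an illustrative assumption rather than the framework from "Designing Preconditioners for SGD": it uses a least-squares toy problem, a classic Jacobi (diagonal) preconditioner, hand-picked step sizes, and full-batch gradients for clarity, with the same update shape `x <- x - eta * P @ grad` that a preconditioned SGD step would apply to a stochastic gradient.

```python
import numpy as np

# Toy ill-conditioned least-squares problem: column scales span 1..1000,
# so the Hessian A^T A has a huge condition number.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d)) * np.logspace(0, 3, d)  # badly scaled columns
x_star = rng.normal(size=d)
b = A @ x_star  # noiseless, so x_star is the exact minimizer


def descend(P, eta, steps=300):
    """Iterate x <- x - eta * P * grad (P is a diagonal preconditioner,
    stored as a vector) and return the final distance to x_star."""
    x = np.zeros(d)
    for _ in range(steps):
        grad = A.T @ (A @ x - b)  # full-batch gradient of 0.5*||Ax - b||^2
        x -= eta * P * grad
    return float(np.linalg.norm(x - x_star))


H_diag = np.sum(A * A, axis=0)                # diagonal of the Hessian A^T A
lam_max = np.linalg.eigvalsh(A.T @ A).max()   # for a stable plain step size

# Plain GD must use eta ~ 1/lambda_max, so poorly scaled coordinates crawl.
plain = descend(P=np.ones(d), eta=1.0 / lam_max)
# Jacobi preconditioning rescales each coordinate by its curvature,
# making an O(1) step size stable (0.5 here is hand-tuned, not principled).
jacobi = descend(P=1.0 / H_diag, eta=0.5)
print(f"plain GD error: {plain:.3e},  Jacobi-preconditioned error: {jacobi:.3e}")
```

In the stochastic setting the same preconditioner also rescales the gradient noise, not just the curvature, which is presumably the trade-off the paper's subtitle ("Noise Floors") refers to: a preconditioner that speeds up the deterministic part of the dynamics can amplify noise in some coordinates.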

Sources

Bipartiteness in Progressive Second-Price Multi-Auction Networks with Perfect Substitute

Designing Preconditioners for SGD: Local Conditioning, Noise Floors, and Basin Stability

Strategy-robust Online Learning in Contextual Pricing

Adaptivity and Universality: Problem-dependent Universal Regret for Online Convex Optimization

A Tale of Two Geometries: Adaptive Optimizers and Non-Euclidean Descent
