Advances in Online Learning and Game Theory

The field of online learning and game theory is advancing rapidly, with a focus on improving regret bounds, convergence rates, and adaptivity across a range of settings. Researchers are developing new algorithms and techniques for online optimization, game-theoretic learning, and decision-making under uncertainty. Notably, there is growing interest in more efficient and robust solution concepts, such as proximal regret and proximal correlated equilibria, which refine our understanding of equilibria in games. In parallel, advances in stochastic optimization, online bilevel optimization, and distributed zeroth-order optimization are extending online learning to more complex and dynamic environments.
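None of the papers' algorithms are reproduced here; as a minimal illustration of the regret framework these works build on, the sketch below runs projected online gradient descent against a sequence of squared losses (an assumed toy environment, not from any of the cited papers) and measures regret against the best fixed action in hindsight.

```python
import numpy as np

# Minimal online convex optimization loop: at each round the learner plays
# x_t, the environment reveals the loss f_t(x) = (x - z_t)^2, and the
# learner takes a gradient step with eta_t = 1/sqrt(t), projected onto [0, 1].
rng = np.random.default_rng(0)
T = 2000
z = rng.uniform(0.0, 1.0, size=T)  # the environment's per-round targets

x = 0.0
cum_loss = 0.0
for t in range(1, T + 1):
    cum_loss += (x - z[t - 1]) ** 2
    grad = 2.0 * (x - z[t - 1])
    x = min(max(x - grad / np.sqrt(t), 0.0), 1.0)  # projected gradient step

# Regret compares against the best fixed action in hindsight, which for
# squared loss is the mean of the targets.
best_loss = ((z.mean() - z) ** 2).sum()
regret = cum_loss - best_loss
print(f"average regret after {T} rounds: {regret / T:.4f}")
```

With the 1/sqrt(t) step schedule, cumulative regret grows at O(sqrt(T)) for convex Lipschitz losses, so the printed per-round average shrinks as the horizon grows; sharpening such bounds under weaker conditions is exactly the kind of question the papers below address.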

Some noteworthy papers in this area include: "A Polynomial-time Algorithm for Online Sparse Linear Regression with Improved Regret Bound under Weaker Conditions," which introduces a polynomial-time algorithm achieving improved regret bounds under weaker assumptions; "Near Optimal Convergence to Coarse Correlated Equilibrium in General-Sum Markov Games," which sharpens the convergence rate to coarse correlated equilibrium in general-sum Markov games; and "Gradient-Variation Online Adaptivity for Accelerated Optimization with Hölder Smoothness," which designs a gradient-variation online learning algorithm with stronger adaptivity than existing methods.
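The convergence result above concerns Markov games; as a simpler, hedged illustration of the underlying principle, namely that no-regret learning by all players drives time-averaged play toward a coarse correlated equilibrium, the sketch below runs Hedge (multiplicative weights) self-play in rock-paper-scissors (a toy example chosen here, not taken from the cited paper) and checks each player's external regret.

```python
import numpy as np

# Payoff matrix for rock-paper-scissors: row player's gain (zero-sum game).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def hedge(w, loss, eta):
    """One multiplicative-weights update on an observed loss vector."""
    w = w * np.exp(-eta * loss)
    return w / w.sum()

T = 5000
eta = np.sqrt(np.log(3) / T)    # standard Hedge step size for horizon T
p = np.array([0.6, 0.3, 0.1])   # deliberately non-uniform starting strategies
q = np.array([0.2, 0.5, 0.3])
cum_p = cum_q = 0.0             # cumulative expected losses actually incurred
vec_p = np.zeros(3)             # cumulative loss of each fixed action
vec_q = np.zeros(3)

for _ in range(T):
    loss_p = -A @ q             # row player minimizes its negative payoff
    loss_q = A.T @ p            # column player minimizes row player's payoff
    cum_p += p @ loss_p
    cum_q += q @ loss_q
    vec_p += loss_p
    vec_q += loss_q
    p = hedge(p, loss_p, eta)
    q = hedge(q, loss_q, eta)

# Per-round external regret; if both are small, the time-averaged joint
# play forms an approximate coarse correlated equilibrium.
regret_p = (cum_p - vec_p.min()) / T
regret_q = (cum_q - vec_q.min()) / T
print(f"per-round regret: {regret_p:.4f}, {regret_q:.4f}")
```

Hedge guarantees per-round regret of O(sqrt(log(n)/T)) regardless of the opponent, which is what makes the regret-to-equilibrium connection robust; the cited works improve the rate at which such dynamics reach equilibrium in richer (Markov) game models.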

Sources

A Polynomial-time Algorithm for Online Sparse Linear Regression with Improved Regret Bound under Weaker Conditions

A Tight Lower Bound for Non-stochastic Multi-armed Bandits with Expert Advice

Stochastic Regret Guarantees for Online Zeroth- and First-Order Bilevel Optimization

From Best Responses to Learning: Investment Efficiency in Dynamic Environment

Proximal Regret and Proximal Correlated Equilibria: A New Tractable Solution Concept for Online Learning and Games

Near Optimal Convergence to Coarse Correlated Equilibrium in General-Sum Markov Games

Online Distributed Zeroth-Order Optimization With Non-Zero-Mean Adverse Noises

Gradient-Variation Online Adaptivity for Accelerated Optimization with Hölder Smoothness

Predictive Compensation in Finite-Horizon LQ Games under Gauss-Markov Deviations

Compact Quantitative Theories of Convex Algebras

Free-order secretary for two-sided independence systems

Online Algorithms for Repeated Optimal Stopping: Achieving Both Competitive Ratio and Regret Bounds

Built with on top of