Advances in Game Theory and Multi-Agent Learning

The field of game theory and multi-agent learning is seeing significant developments, with a focus on collaborative learning, optimal regret guarantees, and convergence to Nash equilibria. Researchers are proposing new algorithms and techniques to improve cooperation and decision-making in complex environments, notably through novel approaches to regularization, optimism, and focusing influence mechanisms in multi-agent reinforcement learning. A key direction is the study of finite-horizon strategies in infinite-horizon games, which has yielded new insights into the convergence of costs and the computation of feedback Nash equilibria (a minimal sketch of that computation appears after the paper list below). Noteworthy papers include:

  • 'On the optimal regret of collaborative personalized linear bandits', which establishes an information-theoretic lower bound and proposes a two-stage collaborative algorithm that attains the optimal regret.
  • 'Optimism Without Regularization: Constant Regret in Zero-Sum Games', which shows that optimistic fictitious play obtains constant regret without any regularization, a surprising result given that prior constant-regret guarantees in zero-sum games relied on regularized learning dynamics (see the sketch of optimistic fictitious play below).
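
The following is a minimal sketch of optimistic fictitious play on a zero-sum matrix game. It is not the paper's construction or analysis, only the standard dynamic the title refers to: each player best-responds to the opponent's empirical action counts, with the opponent's most recent action counted one extra time. The game matrix and horizon below are illustrative.

```python
import numpy as np

def optimistic_fictitious_play(A, T=5000):
    """Optimistic fictitious play on the zero-sum matrix game with
    row-player payoff matrix A (row maximizes, column minimizes)."""
    m, n = A.shape
    row_counts = np.zeros(m)   # empirical action counts, row player
    col_counts = np.zeros(n)   # empirical action counts, column player
    last_row, last_col = 0, 0  # arbitrary initial actions
    row_counts[last_row] += 1
    col_counts[last_col] += 1

    for _ in range(T - 1):
        # Optimistic forecast: the opponent's counts, plus one extra
        # copy of the opponent's most recent action.
        col_forecast = col_counts.copy()
        col_forecast[last_col] += 1
        row_forecast = row_counts.copy()
        row_forecast[last_row] += 1

        # Simultaneous best responses to the optimistic forecasts.
        last_row = int(np.argmax(A @ col_forecast))  # row maximizes
        last_col = int(np.argmin(row_forecast @ A))  # column minimizes
        row_counts[last_row] += 1
        col_counts[last_col] += 1

    # Normalized counts are the time-averaged (empirical) strategies.
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: empirical play approaches the (1/2, 1/2) equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = optimistic_fictitious_play(A)
print(x_bar, y_bar)
```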

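And a minimal sketch of how feedback Nash gains for a finite-horizon, two-player linear-quadratic dynamic game can be computed by a backward coupled-Riccati recursion. Under standard stabilizability assumptions, the gains settle as the horizon grows, which is the convergence phenomenon the finite-horizon-strategy line of work studies; all matrices below are illustrative, not taken from the paper.

```python
import numpy as np

def lq_game_feedback_nash(A, B1, B2, Q1, Q2, R1, R2, N):
    """Backward coupled-Riccati recursion for the feedback Nash
    equilibrium of a finite-horizon two-player LQ game:
        x_{t+1} = A x_t + B1 u1_t + B2 u2_t,
        J_i = sum_t x_t' Q_i x_t + u_{i,t}' R_i u_{i,t}  (each player minimizes).
    Returns the stage-0 feedback gains (u_i = -K_i x) for horizon N >= 1."""
    m1, m2 = B1.shape[1], B2.shape[1]
    P1, P2 = Q1.copy(), Q2.copy()  # terminal value matrices
    for _ in range(N):
        # Coupled first-order conditions for both players' gains,
        # stacked as one linear system M @ [K1; K2] = rhs.
        M = np.block([
            [R1 + B1.T @ P1 @ B1, B1.T @ P1 @ B2],
            [B2.T @ P2 @ B1,      R2 + B2.T @ P2 @ B2],
        ])
        rhs = np.vstack([B1.T @ P1 @ A, B2.T @ P2 @ A])
        K = np.linalg.solve(M, rhs)
        K1, K2 = K[:m1], K[m1:]
        # Value-matrix updates along the closed loop.
        Acl = A - B1 @ K1 - B2 @ K2
        P1 = Q1 + K1.T @ R1 @ K1 + Acl.T @ P1 @ Acl
        P2 = Q2 + K2.T @ R2 @ K2 + Acl.T @ P2 @ Acl
    return K1, K2

# Illustrative example: the stage-0 gains stabilize as the horizon grows.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])
Q1, Q2 = np.eye(2), np.eye(2)
R1, R2 = np.eye(1), np.eye(1)
for N in (5, 20, 80):
    K1, K2 = lq_game_feedback_nash(A, B1, B2, Q1, Q2, R1, R2, N)
    print(N, K1.round(4), K2.round(4))
```
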
Sources

On the optimal regret of collaborative personalized linear bandits

Solving Zero-Sum Convex Markov Games

Optimal Online Bookmaking for Any Number of Outcomes

Optimism Without Regularization: Constant Regret in Zero-Sum Games

Center of Gravity-Guided Focusing Influence Mechanism for Multi-Agent Reinforcement Learning

Finite-Horizon Strategy in Infinite-Horizon Linear-Quadratic Discrete-Time Dynamic Games

On the necessity of adaptive regularisation: Optimal anytime online learning on $\boldsymbol{\ell_p}$-balls
