Multi-Agent Systems and Distributed Learning

The field of multi-agent systems is moving towards more complex and heterogeneous environments, where agents with different capabilities and resources must cooperate to achieve common goals. Recent research has focused on distributed algorithms that can handle these complexities, such as distributed Nash equilibrium seeking algorithms and policy gradient methods with self-attention. These approaches have shown promising results in a range of settings, including multi-robot systems and multi-agent games. Noteworthy papers in this area include:

- Distributed Nash Equilibrium Seeking Algorithm in Aggregative Games for Heterogeneous Multi-Robot Systems, which proposes a distributed optimisation algorithm that computes the Nash equilibrium as a tailored reference for each robot.
- Policy Gradient with Self-Attention for Model-Free Distributed Nonlinear Multi-Agent Games, which demonstrates strong performance across several settings, including distributed linear and nonlinear regulation as well as simulated and real multi-robot pursuit-and-evasion games.
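To make the aggregative-game idea concrete, here is a minimal sketch of distributed Nash equilibrium seeking. It is not the algorithm from the paper above: the quadratic costs, ring communication graph, mixing weights, and step size are all illustrative assumptions. Each agent's cost depends on its own action and on the population aggregate (the mean action), which no agent observes directly; instead, each agent maintains a local estimate of the aggregate via dynamic average consensus with its neighbours while descending its own cost gradient.

```python
import numpy as np

# Illustrative aggregative game (assumed, not from the cited paper):
#   J_i(x_i, sigma) = (x_i - r_i)^2 + x_i * sigma,   sigma = mean of all x_j.
# Each agent i holds an action x_i and a local estimate s_i of sigma.

N = 5
rng = np.random.default_rng(0)
r = rng.uniform(1.0, 2.0, N)   # heterogeneous per-agent reference targets
x = np.zeros(N)                # actions
s = x.copy()                   # local estimates of the aggregate

# Doubly stochastic mixing matrix for a ring communication graph.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

alpha = 0.05                   # gradient step size (assumed)
for _ in range(2000):
    # Local gradient of J_i w.r.t. x_i, using the estimate s_i in place of
    # sigma; the x_i / N term accounts for x_i's own contribution to sigma.
    grad = 2 * (x - r) + s + x / N
    x_new = x - alpha * grad
    # Dynamic average consensus: mix estimates with neighbours and add the
    # local change in action, so mean(s) tracks mean(x) exactly.
    s = W @ s + (x_new - x)
    x = x_new
```

At the Nash equilibrium each agent's gradient vanishes with respect to the true aggregate, so after convergence `2 * (x - r) + x.mean() + x / N` should be close to zero componentwise, and every local estimate `s[i]` should be close to `x.mean()`.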

Sources

Distributed Nash Equilibrium Seeking Algorithm in Aggregative Games for Heterogeneous Multi-Robot Systems

Policy Gradient with Self-Attention for Model-Free Distributed Nonlinear Multi-Agent Games

The Heterogeneous Multi-Agent Challenge

Choose Your Battles: Distributed Learning Over Multiple Tug of War Games

Adaptive Event-Triggered Policy Gradient for Multi-Agent Reinforcement Learning
