Advances in Multi-Agent Reinforcement Learning

The field of multi-agent reinforcement learning (MARL) is advancing rapidly, with a focus on developing more efficient, scalable, and robust methods for coordinating agent behavior. Recent work emphasizes communication, cooperation, and adaptability, particularly in complex, dynamic environments, and explores new approaches to challenges such as partial observability, limited communication, and conflicting objectives. Notable developments include hierarchical frameworks, graph-based methods, and decentralized control strategies, with potential applications ranging from traffic control and autonomous vehicles to smart grids and healthcare systems.

Noteworthy papers include Scalable Population Training for Zero-Shot Coordination, which proposes an efficient training framework for zero-shot coordination in MARL; HCPO: Hierarchical Conductor-Based Policy Optimization in Multi-Agent Reinforcement Learning, which introduces a conductor-based joint policy framework for cooperative MARL; and Transformer-Based Scalable Multi-Agent Reinforcement Learning for Networked Systems with Long-Range Interactions, which presents a unified transformer-based MARL framework for modeling long-range dependencies in networked systems.
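As a rough illustration of the transformer-based direction mentioned above (a minimal sketch, not the cited papers' actual architectures), the following PyTorch example shows a shared policy in which each agent's observation embedding attends to all other agents', so long-range interactions across a networked system can influence every agent's action. The AttentionPolicy class, its dimensions, and the two-layer encoder are illustrative assumptions.

import torch
import torch.nn as nn

class AttentionPolicy(nn.Module):
    """Shared multi-agent policy: self-attention over per-agent observation tokens."""

    def __init__(self, obs_dim: int, act_dim: int, embed_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(embed_dim, act_dim)  # per-agent action logits

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim) -> logits: (batch, n_agents, act_dim)
        tokens = self.embed(obs)
        context = self.encoder(tokens)  # each agent attends to all others
        return self.head(context)

if __name__ == "__main__":
    policy = AttentionPolicy(obs_dim=10, act_dim=5)
    obs = torch.randn(2, 8, 10)  # 2 environments, 8 agents, 10-dim observations
    logits = policy(obs)
    actions = torch.distributions.Categorical(logits=logits).sample()
    print(actions.shape)  # torch.Size([2, 8])

In practice, such an encoder would sit inside a standard MARL training loop (e.g., centralized training with decentralized execution); the sketch only shows how attention lets agent representations depend on distant agents without hand-crafted neighborhoods.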

Sources

Robust and Efficient Communication in Multi-Agent Reinforcement Learning

Scalable Population Training for Zero-Shot Coordination

Deviation Dynamics in Cardinal Hedonic Games

Aspiration-based Perturbed Learning Automata in Games with Noisy Utility Measurements. Part A: Stochastic Stability in Non-zero-Sum Games

Convergence of Multiagent Learning Systems for Traffic control

Goal-Oriented Multi-Agent Reinforcement Learning for Decentralized Agent Teams

HCPO: Hierarchical Conductor-Based Policy Optimization in Multi-Agent Reinforcement Learning

Resilient and Efficient Allocation for Large-Scale Autonomous Fleets via Decentralized Coordination

Conditional Diffusion Model for Multi-Agent Dynamic Task Decomposition

Transformer-Based Scalable Multi-Agent Reinforcement Learning for Networked Systems with Long-Range Interactions

Quantifying Distribution Shift in Traffic Signal Control with Histogram-Based GEH Distance

Fair-GNE: Generalized Nash Equilibrium-Seeking Fairness in Multiagent Healthcare Automation

Z-Merge: Multi-Agent Reinforcement Learning for On-Ramp Merging with Zone-Specific V2X Traffic Information

Data-driven control of network systems: Accounting for communication adaptivity and security

Symmetry-Breaking in Multi-Agent Navigation: Winding Number-Aware MPC with a Learned Topological Strategy

Decentralized Gaussian Process Classification and an Application in Subsea Robotics

A Scenario Approach to the Robustness of Nonconvex-Nonconcave Minimax Problems
