Advances in Multi-Agent Reinforcement Learning

The field of multi-agent reinforcement learning (MARL) is advancing rapidly, with a focus on developing more efficient, scalable, and generalizable methods. Recent research has explored continuous-time value iteration, physics-informed neural networks, and sequential world models to improve the performance of MARL algorithms. There is also growing interest in benchmarks and evaluation metrics, such as the HLSMAC benchmark, that assess the strategic decision-making capabilities of agents.

Two papers are particularly noteworthy. Continuous-Time Value Iteration for Multi-Agent Reinforcement Learning proposes a CT-MARL framework that uses physics-informed neural networks to approximate Hamilton-Jacobi-Bellman (HJB) value functions at scale; a minimal sketch of this idea follows below. HLSMAC: A New StarCraft Multi-Agent Challenge for High-Level Strategic Decision-Making introduces a cooperative MARL benchmark of 12 carefully designed StarCraft II scenarios that test agents against diverse strategic elements.
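As a loose illustration of the physics-informed approach, the sketch below trains a small value network to minimize an HJB residual on a toy single-agent problem. The dynamics `f`, cost `r`, and network sizes here are illustrative assumptions, not the CT-MARL paper's actual multi-agent formulation.

```python
# Minimal sketch of physics-informed value learning for a continuous-time
# control problem. Hypothetical setup: the dynamics f, cost r, and network
# architecture are toy stand-ins, not the CT-MARL paper's formulation.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Approximates V(t, x) with a small MLP."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([t, x], dim=-1))

def hjb_residual(V, t, x, f, r):
    """Residual of dV/dt + r(x) + grad_x V . f(x) = 0, with the
    maximizing action folded into f and r for brevity."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    v = V(t, x)
    dv_dt = torch.autograd.grad(v.sum(), t, create_graph=True)[0]
    dv_dx = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    return dv_dt + r(x) + (dv_dx * f(x)).sum(-1, keepdim=True)

state_dim = 4
V = ValueNet(state_dim)
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
f = lambda x: -x                                # toy linear dynamics (assumption)
r = lambda x: -(x ** 2).sum(-1, keepdim=True)   # toy quadratic cost (assumption)

# Sample collocation points and penalize the squared PDE residual.
for _ in range(100):
    t = torch.rand(256, 1)
    x = torch.randn(256, state_dim)
    loss = hjb_residual(V, t, x, f, r).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point is that automatic differentiation supplies the partial derivatives of V, so the PDE residual itself becomes the training loss at randomly sampled collocation points; no discretized value-iteration sweep is needed.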

Sources

Continuous-Time Value Iteration for Multi-Agent Reinforcement Learning

HLSMAC: A New StarCraft Multi-Agent Challenge for High-Level Strategic Decision-Making

Empowering Multi-Robot Cooperation via Sequential World Models

Collaborative Loco-Manipulation for Pick-and-Place Tasks with Dynamic Reward Curriculum

Constructive Conflict-Driven Multi-Agent Reinforcement Learning for Strategic Diversity

Multi-Quadruped Cooperative Object Transport: Learning Decentralized Pinch-Lift-Move

CRAFT: Coaching Reinforcement Learning Autonomously using Foundation Models for Multi-Robot Coordination Tasks

Local-Canonicalization Equivariant Graph Neural Networks for Sample-Efficient and Generalizable Swarm Robot Control

LEED: A Highly Efficient and Scalable LLM-Empowered Expert Demonstrations Framework for Multi-Agent Reinforcement Learning

Scalable Multi-Objective Robot Reinforcement Learning through Gradient Conflict Resolution
