Advances in Robust Multi-Agent Reinforcement Learning

The field of multi-agent reinforcement learning (MARL) is moving toward more robust and resilient systems, with recent research addressing fault tolerance, adversarial attacks, and partially observable environments. One key direction is algorithms that learn to mitigate the impact of failures and adversarial perturbations, such as the Multi-Agent Robust Training Algorithm (MARTA); this is complemented by work probing those vulnerabilities, such as Constrained Black-Box Attacks Against Multi-Agent Reinforcement Learning. Another direction is unsupervised partner design and adaptive social learning, which enable more effective and flexible learning in heterogeneous-agent settings. Noteworthy papers include Unsupervised Partner Design Enables Robust Ad-hoc Teamwork, which introduces a population-free MARL framework for robust ad-hoc teamwork, and Fault Tolerant Multi-Agent Learning with Adversarial Budget Constraints, which proposes a plug-and-play framework for training MARL agents to remain resilient under potentially severe faults; a sketch of the general budget-constrained fault-injection idea appears below.
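
To make the budget-constrained fault-tolerance idea concrete, here is a minimal, self-contained Python sketch. It does not reproduce the paper's actual method; it only illustrates the general pattern of training-time fault injection where an adversary may disable at most a fixed number of agents per episode. All names here (`ToyTeamEnv`, `run_episode`, `fault_budget`, `fail_prob`) are hypothetical illustrations, not taken from the source papers.

```python
import random

class ToyTeamEnv:
    """Minimal cooperative environment: per step, the team reward equals
    the number of agents that act (disabled agents contribute nothing)."""
    NOOP = 0
    ACT = 1

    def __init__(self, n_agents):
        self.n_agents = n_agents

    def reset(self):
        return [0.0] * self.n_agents  # trivial per-agent observations

    def step(self, actions):
        reward = sum(1 for a in actions if a == self.ACT)
        return [0.0] * self.n_agents, reward, False


def run_episode(env, policies, fault_budget, horizon=20, fail_prob=0.1):
    """One episode in which an adversary may permanently disable agents,
    limited to `fault_budget` failures per episode (the budget constraint).
    The adversary here is random purely for illustration; a learned
    adversary would choose which agent to fail and when."""
    obs = env.reset()
    faulty, budget_left, total = set(), fault_budget, 0.0
    for _ in range(horizon):
        alive = [i for i in range(env.n_agents) if i not in faulty]
        # Adversary spends one unit of budget to disable a random live agent.
        if budget_left > 0 and alive and random.random() < fail_prob:
            faulty.add(random.choice(alive))
            budget_left -= 1
        # Faulty agents emit a no-op; the rest follow their policies.
        actions = [env.NOOP if i in faulty else policies[i](obs[i])
                   for i in range(env.n_agents)]
        obs, reward, done = env.step(actions)
        total += reward
        if done:
            break
    return total


env = ToyTeamEnv(n_agents=4)
policies = [lambda o: ToyTeamEnv.ACT] * 4  # always-act policies
print(run_episode(env, policies, fault_budget=2))
```

Training the surviving agents' policies on episodes generated this way is what would push the team toward behavior that degrades gracefully when teammates fail, which is the intuition behind the adversarial-budget framing.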

Sources

Adversarial Attacks on Reinforcement Learning-based Medical Questionnaire Systems: Input-level Perturbation Strategies and Medical Constraint Validation

Policy Optimization in Multi-Agent Settings under Partially Observable Environments

Unsupervised Partner Design Enables Robust Ad-hoc Teamwork

Perpetual exploration in anonymous synchronous networks with a Byzantine black hole

Fault Tolerant Multi-Agent Learning with Adversarial Budget Constraints

Constrained Black-Box Attacks Against Multi-Agent Reinforcement Learning

Multi-Agent Trust Region Policy Optimisation: A Joint Constraint Approach
