Advancements in Multi-Agent Reinforcement Learning

The field of multi-agent reinforcement learning (MARL) is moving toward more efficient and effective methods for training agents in complex environments. One key direction is the incorporation of human expertise and knowledge into the learning process, for example as shaping signals that guide agents toward better policies. Another important area of research is the development of more interpretable and transparent models, which provide insight into the agents' decision-making. Attention mechanisms and hierarchical policies are also becoming increasingly popular, as they enable more efficient communication and coordination between agents; a minimal communication sketch follows below.
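
To make the attention-based coordination idea concrete, here is a minimal sketch in PyTorch of agents attending over each other's encoded observations. The `AttentionCommLayer` name, the linear encoder, the 64-dimensional embedding, and all tensor shapes are illustrative assumptions, not details taken from any of the cited papers.

```python
# Minimal sketch of attention-based inter-agent communication (assumed design).
import torch
import torch.nn as nn

class AttentionCommLayer(nn.Module):
    """Each agent attends over all agents' encoded observations to build
    a coordination-aware representation before selecting an action."""
    def __init__(self, obs_dim: int, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim)
        h = self.encoder(obs)          # per-agent embeddings
        mixed, _ = self.attn(h, h, h)  # each agent attends to all agents
        return mixed                   # (batch, n_agents, embed_dim)

# Usage with hypothetical sizes: 8 environments, 3 agents, 10-dim observations.
layer = AttentionCommLayer(obs_dim=10)
out = layer(torch.randn(8, 3, 10))
print(out.shape)  # torch.Size([8, 3, 64])
```

Self-attention here gives each agent a learned, input-dependent weighting over teammates' embeddings, which is the mechanism such summaries refer to as dynamic inter-agent communication.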

Noteworthy papers in this area include Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning via Incorporating Generalized Human Expertise, which proposes a framework for integrating human knowledge into MARL algorithms through learned individual intrinsic rewards (a generic shaping sketch follows below); Concept Learning for Cooperative Multi-Agent Reinforcement Learning, which introduces a value-based method for learning interpretable cooperation concepts; and Enhancing Multi-Agent Collaboration with Attention-Based Actor-Critic Policies, which employs a centralized training/centralized execution scheme with multi-headed attention to facilitate dynamic inter-agent communication.
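
The intrinsic-reward direction can be illustrated with a short sketch of the generic recipe: add a learned per-agent intrinsic term to the shared extrinsic reward. `IntrinsicRewardNet`, `shaped_reward`, the `beta` weight, and all shapes below are hypothetical; the cited paper's actual method, which incorporates generalized human expertise into the learned reward, is more involved.

```python
# Minimal sketch of per-agent intrinsic-reward shaping (assumed recipe).
import torch
import torch.nn as nn

class IntrinsicRewardNet(nn.Module):
    """Maps an agent's local observation-action pair to a scalar intrinsic
    reward; in practice its parameters are trained so that the shaped
    returns improve the team's extrinsic objective."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def shaped_reward(r_ext, obs, act, reward_net, beta: float = 0.1):
    """Per-agent training signal: shared extrinsic reward plus a scaled
    individual intrinsic term."""
    with torch.no_grad():  # intrinsic net is treated as fixed for this step
        r_int = reward_net(obs, act)
    return r_ext + beta * r_int

# Hypothetical shapes: 3 agents, 10-dim observations, 4-dim one-hot actions.
net = IntrinsicRewardNet(obs_dim=10, act_dim=4)
r = shaped_reward(torch.zeros(3), torch.randn(3, 10), torch.eye(4)[:3], net)
print(r.shape)  # torch.Size([3])
```

In a full pipeline, `beta` would typically be annealed or learned and the reward network optimized against the team's extrinsic return; here the network is randomly initialized purely to demonstrate the shaping interface.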

Sources

Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning via Incorporating Generalized Human Expertise

Minding Motivation: The Effect of Intrinsic Motivation on Agent Behaviors

Concept Learning for Cooperative Multi-Agent Reinforcement Learning

Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining

"Teammates, Am I Clear?": Analysing Legible Behaviours in Teams

Probabilistic Active Goal Recognition

Enhancing Multi-Agent Collaboration with Attention-Based Actor-Critic Policies

Hierarchical Message-Passing Policies for Multi-Agent Reinforcement Learning
