The field of multi-agent systems is moving toward more robust and adaptable agents that can both collaborate and compete in complex environments. Researchers are exploring new methods for agent modeling, such as multi-retrieval and dynamic generation, to improve collaboration and competition with unseen teammates and opponents. Communication signals and emergent cooperation are also being investigated, with a focus on how agents can develop shared behavioral conventions through learning and signaling. Additionally, joint evolution dynamics and bilateral team formation are being studied to improve the performance and generalization of multi-agent systems. Noteworthy papers include:
- Generalizable Agent Modeling for Agent Collaboration-Competition Adaptation with Multi-Retrieval and Dynamic Generation, which models both unseen teammates and opponents from their behavioral trajectories via multi-retrieval and dynamic generation (a rough retrieval sketch follows this list).
- JoyAgents-R1: Joint Evolution Dynamics for Versatile Multi-LLM Agents with Reinforcement Learning, which proposes a joint evolution dynamics method for training multiple LLM agents with reinforcement learning, reported to reach a holistic equilibrium with strong decision-making and memory capabilities (a toy joint-update sketch also follows this list).
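To make the retrieval idea concrete, the minimal Python sketch below matches an unseen agent's behavioral trajectory against a library of previously observed agents. The mean-pooled embedding, cosine similarity, and the `AgentModelLibrary` / `embed_trajectory` names are illustrative assumptions, not the paper's actual architecture (which also includes a dynamic generation component not shown here).

```python
import numpy as np

def embed_trajectory(trajectory: np.ndarray) -> np.ndarray:
    """Summarize a behavioral trajectory (T x feature_dim array of
    state-action features) as a fixed-size embedding via mean pooling."""
    return trajectory.mean(axis=0)

class AgentModelLibrary:
    """Stores embeddings of previously seen teammates/opponents and
    retrieves the closest matches for a new, unseen agent."""

    def __init__(self):
        self.embeddings = []   # one embedding per known agent
        self.labels = []       # identifier of each known agent

    def add(self, label: str, trajectory: np.ndarray) -> None:
        self.embeddings.append(embed_trajectory(trajectory))
        self.labels.append(label)

    def retrieve(self, trajectory: np.ndarray, k: int = 3):
        """Return the k known agents whose behavior is most similar
        (by cosine similarity) to the observed trajectory."""
        query = embed_trajectory(trajectory)
        emb = np.stack(self.embeddings)
        sims = emb @ query / (
            np.linalg.norm(emb, axis=1) * np.linalg.norm(query) + 1e-8
        )
        top = np.argsort(-sims)[:k]
        return [(self.labels[i], float(sims[i])) for i in top]

# Usage: build a library from known agents' trajectories, then match an
# unseen agent's behavior against it to condition the ego policy.
rng = np.random.default_rng(0)
library = AgentModelLibrary()
for name in ["cooperator", "defector", "random"]:
    library.add(name, rng.normal(size=(50, 8)))

unseen = rng.normal(size=(50, 8))
print(library.retrieve(unseen, k=2))
```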
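Similarly, the joint-evolution idea can be caricatured as several policies being updated together on a shared reward. The toy sketch below jointly updates two softmax policies with REINFORCE on a random matrix game; the game, baseline, and learning rate are arbitrary illustrations and are unrelated to the paper's LLM agents or memory mechanisms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint-evolution loop: two agents with softmax policies over 3 actions
# are updated together with REINFORCE on a shared team reward.
n_actions = 3
logits = [np.zeros(n_actions), np.zeros(n_actions)]  # one logit vector per agent
# Shared team reward depends on the joint action via a fixed random payoff matrix.
payoff = rng.uniform(size=(n_actions, n_actions))

def softmax(x: np.ndarray) -> np.ndarray:
    z = np.exp(x - x.max())
    return z / z.sum()

lr = 0.5
for step in range(500):
    probs = [softmax(l) for l in logits]
    actions = [rng.choice(n_actions, p=p) for p in probs]
    reward = payoff[actions[0], actions[1]]
    baseline = probs[0] @ payoff @ probs[1]   # expected reward under current policies
    advantage = reward - baseline
    for i in range(2):
        grad = -probs[i]
        grad[actions[i]] += 1.0               # d log pi(a) / d logits for a softmax policy
        logits[i] += lr * advantage * grad    # joint policy-gradient ascent step

print("learned joint action:", [int(np.argmax(l)) for l in logits])
print("argmax of payoff matrix:", np.unravel_index(payoff.argmax(), payoff.shape))
```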