The field of multi-agent systems built on large language models is advancing rapidly, with a focus on developing efficient and effective collaboration mechanisms. Recent research has explored distributed algorithms, modular task decomposition, and dynamic scheduling to enable multiple agents to work together seamlessly. There is also growing interest in designing systems that adapt to changing task requirements and optimize performance while minimizing cost. Noteworthy papers in this area include:

- Design for One, Deploy for Many, which proposes a distributed multi-agent maze traversal algorithm.
- Generalizing Test-time Compute-optimal Scaling as an Optimizable Graph, which introduces a framework for searching for compute-optimal model combinations and architectures.
- Modular Task Decomposition and Dynamic Collaboration in Multi-Agent Systems Driven by Large Language Models, which presents a multi-agent architecture for modular task decomposition and dynamic collaboration.
- Optimal-Agent-Selection, which proposes a state-aware routing framework for efficient multi-agent collaboration.
- The Collaboration Gap, which evaluates the collaborative capabilities of leading models and proposes a relay inference approach to improve outcomes.
- Controlling Performance and Budget of a Centralized Multi-agent LLM System with Reinforcement Learning, which introduces a centralized multi-LLM framework for cost-efficient and cost-controllable collaboration.
- OptiMA, which proposes a transaction-based framework for designing highly complex multi-agent systems.
- Agentmandering, which reimagines redistricting as a turn-based negotiation between two agents representing opposing political interests.
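A recurring idea across these papers is routing subtasks to agents while trading quality against cost under a budget. The sketch below is a minimal, hypothetical illustration of that pattern; the agent names, skill scores, cost figures, and the greedy quality-per-cost rule are all assumptions for illustration and are not taken from any of the cited papers.

```python
# Hypothetical sketch: budget-aware routing of subtasks to agents.
# All agents, scores, and costs here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    skill: dict   # task type -> estimated quality in [0, 1]
    cost: float   # estimated cost per call


def route(subtasks, agents, budget):
    """Greedily assign each subtask to the agent with the best
    quality-per-cost score that still fits the remaining budget."""
    assignments = []
    remaining = budget
    for task_type in subtasks:
        candidates = [a for a in agents if a.cost <= remaining]
        if not candidates:
            break  # budget exhausted; remaining subtasks go unassigned
        best = max(candidates,
                   key=lambda a: a.skill.get(task_type, 0.0) / a.cost)
        assignments.append((task_type, best.name))
        remaining -= best.cost
    return assignments, remaining


agents = [
    Agent("coder", {"code": 0.9, "plan": 0.4}, cost=2.0),
    Agent("planner", {"code": 0.3, "plan": 0.8}, cost=1.0),
]
plan, leftover = route(["plan", "code", "code"], agents, budget=5.0)
# plan -> [('plan', 'planner'), ('code', 'coder'), ('code', 'coder')]
```

Real systems in this literature replace the static skill table with learned, state-dependent estimates (e.g. via reinforcement learning), but the core loop of scoring candidates against a shrinking budget is the same.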
Advances in Multi-Agent Systems and Large Language Models
Sources
Modular Task Decomposition and Dynamic Collaboration in Multi-Agent Systems Driven by Large Language Models
Controlling Performance and Budget of a Centralized Multi-agent LLM System with Reinforcement Learning