Advancements in Large Language Model Multi-Agent Systems

The field of Large Language Model (LLM) multi-agent systems is evolving rapidly, with a focus on more scalable, adaptable, and robust architectures. Recent research emphasizes self-evolution, recursive self-generation, and dynamic communication structures as routes to these goals. Several frameworks address the limitations of traditional multi-agent systems through pyramid-like DAG-based structures, dual-audit mechanisms, and agent self-evolution, yielding significant performance gains across benchmarks and applications such as deep research, code generation, and natural language conversation. Noteworthy papers include InfiAgent, a self-evolving pyramid agent framework for infinite scenarios that achieves 9.9% higher performance than comparable frameworks, and MAS$^2$, a self-generative, self-configuring, and self-rectifying multi-agent system that achieves gains of up to 19.6% over state-of-the-art systems.
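To make the pyramid-like DAG idea concrete, the sketch below wires agents into a directed acyclic graph and executes them in topological order, with each agent consuming its parents' outputs. This is a minimal illustration under assumed names, not the InfiAgent or MAS$^2$ implementation: the `Agent` class and its `run` callable are hypothetical stand-ins for LLM-backed workers.

```python
from collections import defaultdict, deque
from typing import Callable, Dict, List, Tuple

class Agent:
    """Hypothetical LLM-backed worker; `run` maps parent outputs to one output."""
    def __init__(self, name: str, run: Callable[[List[str]], str]):
        self.name = name
        self.run = run

def execute_dag(agents: Dict[str, Agent], edges: List[Tuple[str, str]]) -> Dict[str, str]:
    """Run agents in topological order so each sees all of its parents' outputs."""
    children = defaultdict(list)
    parents = defaultdict(list)
    indegree = {name: 0 for name in agents}
    for parent, child in edges:
        children[parent].append(child)
        parents[child].append(parent)
        indegree[child] += 1
    queue = deque(name for name, deg in indegree.items() if deg == 0)
    outputs: Dict[str, str] = {}
    while queue:
        name = queue.popleft()
        # Topological order guarantees every parent has already produced output.
        outputs[name] = agents[name].run([outputs[p] for p in parents[name]])
        for child in children[name]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return outputs

# A three-level pyramid: a planner feeds two workers, whose results an auditor reviews.
agents = {
    "planner": Agent("planner", lambda _: "plan: split task into A and B"),
    "worker_a": Agent("worker_a", lambda inp: f"result A given [{inp[0]}]"),
    "worker_b": Agent("worker_b", lambda inp: f"result B given [{inp[0]}]"),
    "auditor": Agent("auditor", lambda inp: "audited: " + " | ".join(inp)),
}
edges = [("planner", "worker_a"), ("planner", "worker_b"),
         ("worker_a", "auditor"), ("worker_b", "auditor")]
print(execute_dag(agents, edges)["auditor"])
```

In a real system each `run` would invoke an LLM, and the DAG ensures that review or audit agents only fire once all upstream outputs are available, which is the structural property the pyramid and dual-audit designs rely on.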

Sources

InfiAgent: Self-Evolving Pyramid Agent Framework for Infinite Scenarios

MAS$^2$: Self-Generative, Self-Configuring, Self-Rectifying Multi-Agent Systems

ScheduleMe: Multi-Agent Calendar Assistant

JoyAgent-JDGenie: Technical Report on the GAIA

Stochastic Self-Organization in Multi-Agent Systems

TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture

AMAS: Adaptively Determining Communication Topology for LLM-based Multi-Agent System
