Large Language Model Agents in Complex Systems

The field of large language model (LLM) agents is moving toward more principled, systematic approaches to modeling complex systems. Recent work focuses on architectures that make an agent's cognitive components, such as memory and tool use, explicit and modular, enabling analysis of how each component shapes collective behavior. This yields more realistic and informative simulations, which can in turn inform policy decisions. A second line of research develops frameworks for complex execution problems with flexible time boundaries and multiple constraints; these frameworks have the potential to optimize execution paths and improve performance across a variety of scenarios. Noteworthy papers include:

  • The introduction of Shachi, a formal methodology and modular framework for LLM agents that provides a rigorous foundation for building and evaluating such agents.
  • The development of Large Execution Models (LEMs), a novel deep learning framework that extends transformer-based architectures to address complex execution problems.
  • The proposal of an unbiased collective memory design for LLM-based agentic 6G cross-domain management, which can mitigate cognitive distortions and improve negotiation outcomes.
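The modular agent design described above can be illustrated with a minimal sketch. This is not Shachi's actual API (the paper's interfaces are not reproduced here); the `Memory` and `Agent` classes and the keyword-based tool dispatch are hypothetical stand-ins showing how memory and tools can be made pluggable so their effect on behavior can be studied in isolation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Memory:
    """Append-only episodic memory; swap in other policies to compare."""
    entries: List[str] = field(default_factory=list)

    def remember(self, observation: str) -> None:
        self.entries.append(observation)

    def recall(self, k: int = 3) -> List[str]:
        # Return the k most recent observations.
        return self.entries[-k:]

@dataclass
class Agent:
    name: str
    memory: Memory = field(default_factory=Memory)
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def act(self, observation: str) -> str:
        self.memory.remember(observation)
        # In a real system an LLM would choose a tool based on the
        # observation and recalled context; here we dispatch naively
        # on a keyword match to keep the sketch self-contained.
        for tool_name, tool in self.tools.items():
            if tool_name in observation:
                return tool(observation)
        return "no-op"

agent = Agent(name="a1", tools={"search": lambda obs: f"searched: {obs}"})
print(agent.act("please search for shelters"))  # dispatches the search tool
print(agent.memory.recall())                    # memory persists across steps
```

Because each component sits behind a small interface, an experiment can vary one component (for example, a biased versus unbiased collective memory) while holding the rest fixed, which is the kind of controlled analysis the modular framing enables.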

Sources

Reimagining Agent-based Modeling with Large Language Model Agents via Shachi

What Makes LLM Agent Simulations Useful for Policy? Insights From an Iterative Design Engagement in Emergency Preparedness

LEMs: A Primer On Large Execution Models

Toward an Unbiased Collective Memory for Efficient LLM-Based Agentic 6G Cross-Domain Management
