Advances in Social Intelligence and Multi-Agent Systems

The field of artificial intelligence is moving toward socially aware systems that can navigate complex social interactions. Recent research has focused on novel representation formalisms, such as structured social world models, to improve AI systems' ability to reason about social dynamics. Interest in multi-agent systems has also surged, with researchers exploring how large language models (LLMs) can facilitate collaboration and cooperation among agents. Noteworthy papers in this area propose frameworks for LLM-based multi-agent collaboration, such as COCORELI and OSC, which demonstrate significant improvements in task performance and communication efficiency. Other notable works include ProToM, a Theory of Mind-informed facilitator that promotes prosocial behavior in multi-agent systems, and Tree of Agents, a multi-agent reasoning framework that extends the long-context capabilities of LLMs. Together, these advances point toward more sophisticated, human-like AI systems that can interact and cooperate effectively with humans.

Sources

Social World Models

Quantum-like Coherence Derived from the Interaction between Chemical Reaction and Its Environment

Nano Machine Intelligence: From a Communication Perspective

LLMs and their Limited Theory of Mind: Evaluating Mental State Annotations in Situated Dialogue

Prebiotic Functional Programs: Endogenous Selection in an Artificial Chemistry

The evolution of trust as a cognitive shortcut in repeated interactions

COCORELI: Cooperative, Compositional Reconstitution & Execution of Language Instructions

Emergent Social Dynamics of LLM Agents in the El Farol Bar Problem

Collaboration and Conflict between Humans and Language Models through the Lens of Game Theory

OSC: Cognitive Orchestration through Dynamic Knowledge Alignment in Multi-Agent LLM Collaboration

ToM-SSI: Evaluating Theory of Mind in Situated Social Interactions

ProToM: Promoting Prosocial Behaviour via Theory of Mind-Informed Feedback

Plantbot: Integrating Plant and Robot through LLM Modular Agent Networks

Newton to Einstein: Axiom-Based Discovery via Game Design

Orchestrator: Active Inference for Multi-Agent Systems in Long-Horizon Tasks

Let's Roleplay: Examining LLM Alignment in Collaborative Dialogues

Tree of Agents: Improving Long-Context Capabilities of Large Language Models through Multi-Perspective Reasoning

Componentization: Decomposing Monolithic LLM Responses into Manipulable Semantic Units
