Advances in Multi-Agent Cooperation and Artificial Intelligence

The field of artificial intelligence is moving toward more sophisticated, human-like cooperation between agents, with a focus on frameworks that enable agents to reason about others' beliefs and goals. This is evident in novel approaches to multi-agent cooperation such as theory of mind and active inference, which allow agents to infer others' beliefs solely from observable behavior and thereby cooperate and make decisions more effectively. There is also growing interest in building more advanced, human-like intelligence into robots that can learn and adapt in complex environments. Noteworthy papers in this area include "Theory of Mind Using Active Inference", which presents a novel approach to multi-agent cooperation combining theory of mind with active inference, and "Transferring Expert Cognitive Models to Social Robots via Agentic Concept Bottleneck Models", which proposes a framework for transferring expert cognitive models to social robots so that they can interpret social exchanges and provide transparent recommendations.
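To make the idea of inferring beliefs "solely from observable behavior" concrete, here is a minimal sketch of Bayesian goal inference, a simplification of the theory-of-mind machinery the papers above develop. All names, the toy one-dimensional world, the candidate goals, and the Boltzmann-rational action model are illustrative assumptions, not taken from any of the cited papers.

```python
import math

# Hypothetical 1-D world: an observed agent may step -1 or +1 each turn.
# Candidate goal positions it might be pursuing (assumed for illustration).
GOALS = {"left": -3, "right": 3}

def action_likelihood(pos, action, goal, beta=2.0):
    """Boltzmann-rational likelihood of an action given a goal: actions
    that move closer to the goal are exponentially more probable."""
    def utility(a):
        return -abs((pos + a) - goal)
    z = sum(math.exp(beta * utility(a)) for a in (-1, 1))
    return math.exp(beta * utility(action)) / z

def infer_goal(observations, prior=None):
    """Bayesian belief update over the other agent's goal, using only
    observed (position, action) pairs -- no access to its internal state."""
    beliefs = dict(prior or {g: 1.0 / len(GOALS) for g in GOALS})
    for pos, action in observations:
        for name, goal in GOALS.items():
            beliefs[name] *= action_likelihood(pos, action, goal)
        total = sum(beliefs.values())
        beliefs = {g: b / total for g, b in beliefs.items()}
    return beliefs

# The observed agent repeatedly steps right, so the posterior should
# concentrate on the "right" goal.
beliefs = infer_goal([(0, 1), (1, 1), (2, 1)])
```

Full active-inference accounts additionally model how the other agent updates its own beliefs, but the core move is the same: invert a generative model of behavior to recover hidden goals.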

Sources

Theory of Mind Using Active Inference: A Framework for Multi-Agent Cooperation

Intensional FOL over Belnap's Bilattice for Strong-AI Robotics

Strategic Hypothesis Testing

Forgive and Forget? An Industry 5.0 Approach to Trust-Fatigue Co-regulation in Human-Cobot Order Picking

What Do Agents Think Others Would Do? Level-2 Inverse Games for Inferring Agents' Estimates of Others' Objectives

Transferring Expert Cognitive Models to Social Robots via Agentic Concept Bottleneck Models

Generic-to-Specific Reasoning and Learning for Scalable Ad Hoc Teamwork

From MAS to MARS: Coordination Failures and Reasoning Trade-offs in Hierarchical Multi-Agent Robotic Systems within a Healthcare Scenario