Advances in Human-AI Collaboration and Social Simulation

The field of human-AI collaboration and social simulation is advancing rapidly, with a focus on building more realistic models of human behavior. Recent work integrates large language models (LLMs) into agent-based social simulation frameworks, yielding more dynamic models of human interaction and improving simulation fidelity in areas such as cooperation, trust, and decision-making. Researchers are also examining whether LLMs can reason about trust and how they can foster trust in human-AI interactions. Noteworthy contributions include the Psychological-mechanism Agent framework, which simulates human behavior based on the Cognitive Triangle; the AGORA framework, which enables collaborative ensemble learning and reports state-of-the-art performance on mathematical benchmarks; the validation of generative agent-based models of social norm enforcement; and simulations of social influence dynamics with LLM-based multi-agent systems.
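
To make the integration pattern concrete, the minimal sketch below illustrates one common way LLMs are embedded in an agent-based social simulation: each agent carries a memory of past interactions, and its cooperation decision each round is delegated to a language model. This is not taken from any of the cited papers; the `Agent`, `query_llm`, and `build_prompt` names are hypothetical, and `query_llm` is a random placeholder standing in for a real LLM call.

```python
# Minimal sketch of an LLM-in-the-loop agent-based cooperation simulation.
# Assumptions: query_llm is a placeholder for an actual LLM API call; the
# pairing and prompt format are illustrative, not from any cited framework.
import random
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    memory: list = field(default_factory=list)  # record of past interactions


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns 'cooperate' or 'defect' at random."""
    return random.choice(["cooperate", "defect"])


def build_prompt(agent: Agent, partner: Agent) -> str:
    """Build a decision prompt from the agent's recent interaction history."""
    history = "; ".join(agent.memory[-3:]) or "none"
    return (
        f"You are {agent.name} in a repeated cooperation game with {partner.name}. "
        f"Recent history: {history}. Reply with 'cooperate' or 'defect'."
    )


def step(agents: list) -> None:
    """One round: pair agents, query the LLM for each decision, record outcomes."""
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        choice_a = query_llm(build_prompt(a, b))
        choice_b = query_llm(build_prompt(b, a))
        a.memory.append(f"{b.name} chose {choice_b}")
        b.memory.append(f"{a.name} chose {choice_a}")


if __name__ == "__main__":
    population = [Agent(f"agent_{i}") for i in range(6)]
    for _ in range(5):
        step(population)
    print(population[0].memory)
```

In a real simulation the placeholder decision function would be replaced by a call to a hosted or local model, and the memory and prompt design would encode the psychological or normative mechanisms under study.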

Sources

People Are Highly Cooperative with Large Language Models, Especially When Communication Is Possible or Following Human Interaction

Improving the State of the Art for Training Human-AI Teams: Technical Report #5 -- Individual Differences and Team Qualities to Measure in a Human-AI Teaming Testbed

Procedural city modeling

Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges

Simulating Human Behavior with the Psychological-mechanism Agent: Integrating Feeling, Thought, and Action

MLC-Agent: Cognitive Model based on Memory-Learning Collaboration in LLM Empowered Agent Simulation Environment

Bridging the Gap: Enhancing News Interpretation Across Diverse Audiences with Large Language Models

Can LLMs Reason About Trust?: A Pilot Study

AGORA: Incentivizing Group Emergence Capability in LLMs via Group Distillation

Games Agents Play: Towards Transactional Analysis in LLM-based Multi-Agent Systems

Towards Cognitive Synergy in LLM-Based Multi-Agent Systems: Integrating Theory of Mind and Critical Evaluation

Validating Generative Agent-Based Models of Social Norm Enforcement: From Replication to Novel Predictions

Towards Simulating Social Influence Dynamics with LLM-based Multi-agents

Bifr\"{o}st: Spatial Networking with Bigraphs

Knowledge Is More Than Performance: How Knowledge Diversity Drives Human-Human and Human-AI Interaction Synergy and Reveals Pure-AI Interaction Shortfalls

A survey of multi-agent geosimulation methodologies: from ABM to LLM
