Advances in Human-AI Collaboration and Social Simulation

The field of human-AI collaboration and social simulation is advancing rapidly, with a focus on building more sophisticated and realistic models of human behavior. Recent research highlights the value of integrating large language models (LLMs) into social simulation frameworks, enabling more dynamic and lifelike models of human interaction. LLMs have been shown to improve the accuracy and effectiveness of social simulations, particularly for cooperation, trust, and decision-making, and researchers are also exploring their potential to reason about trust and to induce trust in human-AI interactions. Noteworthy papers include the Psychological-mechanism Agent framework, which simulates human behavior via the Cognitive Triangle of feeling, thought, and action, and the AGORA framework, which enables collaborative ensemble learning and achieves state-of-the-art performance on mathematical benchmarks. Other notable work validates generative agent-based models of social norm enforcement and investigates social influence dynamics with LLM-based multi-agent simulations.
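To make the simulation loop concrete, the sketch below shows how an LLM-driven agent might cycle through the Cognitive Triangle of feeling, thought, and action, with two agents' outputs feeding each other as in a social-influence study. This is a minimal illustration only: the query_llm wrapper, the prompt shapes, and the memory scheme are assumptions for the sketch, not the implementation of any cited framework.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for an LLM call; in practice this would wrap a
# real model API. The prompt and response shapes are illustrative only.
def query_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)

    def step(self, observation: str) -> str:
        # Feeling: appraise the observation emotionally.
        feeling = query_llm(
            f"As {self.name}, describe your emotional reaction to: {observation}"
        )
        # Thought: reason over the feeling plus recent remembered context.
        thought = query_llm(
            f"Given the feeling '{feeling}' and memories {self.memory[-3:]}, "
            f"what does {self.name} conclude?"
        )
        # Action: commit to a behavior and store the episode in memory.
        action = query_llm(f"Acting on '{thought}', what does {self.name} do?")
        self.memory.append(f"{observation} -> {action}")
        return action

# Social influence loop: each agent's action becomes the other's next
# observation, so behavior propagates between agents over rounds.
alice, bob = Agent("Alice"), Agent("Bob")
message = "Bob proposes splitting the reward unevenly."
for _ in range(3):
    message = alice.step(message)
    message = bob.step(message)
```

Keeping feeling, thought, and action as separate LLM calls makes each stage of the appraisal inspectable, which is what lets agent-based models of this kind be validated against observed human behavior.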
Sources
People Are Highly Cooperative with Large Language Models, Especially When Communication Is Possible or Following Human Interaction
Improving the State of the Art for Training Human-AI Teams: Technical Report #5 -- Individual Differences and Team Qualities to Measure in a Human-AI Teaming Testbed
Simulating Human Behavior with the Psychological-mechanism Agent: Integrating Feeling, Thought, and Action
MLC-Agent: Cognitive Model based on Memory-Learning Collaboration in LLM Empowered Agent Simulation Environment
Towards Cognitive Synergy in LLM-Based Multi-Agent Systems: Integrating Theory of Mind and Critical Evaluation
Validating Generative Agent-Based Models of Social Norm Enforcement: From Replication to Novel Predictions