Advances in Large Language Models and Social Cognition

The field of large language models (LLMs) is evolving rapidly, with growing attention to their cognitive and social capabilities. Recent work has examined whether LLMs exhibit human-like personality traits, finding that such traits are dynamic and input-driven rather than fixed. Other studies have explored concept incongruence in role-playing settings, highlighting the need for more consistent model behavior under incongruent conditions. A third line of research simulates prosocial behavior in LLM agents, showing that these agents can behave prosocially in stable, context-sensitive ways. Finally, game-theoretic frameworks for networks of strategic agents have clarified how inequality emerges and persists in such systems. Minimal sketches of several of these ideas follow the paper list below. Noteworthy papers include:

  • The Way We Prompt, which proposes prompt engineering as a scientific method for probing the deep structure of meaning, and
  • Simulating Prosocial Behavior and Social Contagion in LLM Agents under Institutional Interventions, which presents a simulation framework for examining prosocial behavior in LLM-based agents.
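
To make the "dynamic and input-driven" personality claim concrete, one can administer the same questionnaire items to a model under different conditioning prompts and compare the resulting trait scores. The sketch below does this with illustrative Likert-style items; the `query_model` stub and the items are assumptions for illustration, not the instrument or API used in the cited study.

```python
# Minimal sketch: probing input-driven personality traits in an LLM.
# `query_model` is a placeholder for any chat-completion call, and the
# items are illustrative Big Five-style statements (assumptions, not the
# cited study's instrument).
import random

ITEMS = {
    "extraversion": "I see myself as someone who is outgoing and sociable.",
    "agreeableness": "I see myself as someone who is considerate and kind.",
}

CONTEXTS = {
    "neutral": "You are a helpful assistant.",
    "stressed": "You are under heavy time pressure and feeling frustrated.",
}

def query_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real chat-completion API; returns a canned
    Likert rating so the sketch runs end to end."""
    rng = random.Random(hash((system_prompt, user_prompt)) & 0xFFFF)
    return str(rng.randint(1, 5))

def score_item(item: str, system_prompt: str) -> int:
    prompt = ("Rate your agreement with the statement on a 1-5 scale, "
              f"answering with a single digit.\nStatement: {item}")
    reply = query_model(system_prompt, prompt)
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 3  # fall back to the midpoint

def trait_profile(context: str) -> dict:
    return {trait: score_item(item, CONTEXTS[context])
            for trait, item in ITEMS.items()}

# Comparing profiles across contexts is what makes "dynamic, input-driven"
# traits measurable: same model, different conditioning prompts.
print(trait_profile("neutral"), trait_profile("stressed"))
```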
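
The prosocial-contagion setting can be sketched as an agent-based loop: each round, every agent observes how many of its neighbors acted prosocially, and an institutional intervention shifts its decision. In the cited framework the decision comes from an LLM agent; here a logistic rule stands in so the dynamics run end to end, and the flat incentive term is an assumption rather than the paper's intervention design.

```python
# Minimal sketch of prosocial contagion under an institutional intervention.
# A logistic decision rule stands in for the LLM agent's choice; the
# ring-lattice network and the flat INCENTIVE bonus are assumptions.
import math
import random

random.seed(0)

N_AGENTS, ROUNDS, NEIGHBORS = 50, 20, 4
INCENTIVE = 0.5  # intervention: additive boost to prosocial utility

agents = [{"propensity": random.gauss(0.0, 1.0), "prosocial": False}
          for _ in range(N_AGENTS)]
# ring lattice: each agent observes a fixed set of neighbors
neighbors = {i: [(i + d) % N_AGENTS for d in range(1, NEIGHBORS + 1)]
             for i in range(N_AGENTS)}

def decide_prosocial(agent, peer_rate, incentive):
    """Stand-in for an LLM agent's decision: logistic in its own
    propensity, the observed peer rate (contagion), and the incentive."""
    utility = agent["propensity"] + 2.0 * peer_rate + incentive
    return random.random() < 1 / (1 + math.exp(-utility))

for t in range(ROUNDS):
    snapshot = [a["prosocial"] for a in agents]
    for i, agent in enumerate(agents):
        peer_rate = sum(snapshot[j] for j in neighbors[i]) / NEIGHBORS
        agent["prosocial"] = decide_prosocial(agent, peer_rate, INCENTIVE)
    rate = sum(a["prosocial"] for a in agents) / N_AGENTS
    print(f"round {t:2d}: prosocial rate = {rate:.2f}")
```

Sweeping `INCENTIVE` (or switching it off after a warm-up period) is the natural experiment here: it separates behavior sustained by contagion from behavior sustained by the intervention itself.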
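
Similarly, inequality in a network of strategic agents can be sketched as iterative pairwise transactions with a Gini-coefficient readout. The transaction rule below (stake a fraction of the poorer agent's wealth, with the winner chosen by relative strategy weight) is an illustrative assumption, not the cited paper's game.

```python
# Minimal sketch of iterative game-theoretic transactions on a network.
# The ring network, stake rule, and strategy-weighted win probability are
# illustrative assumptions; the Gini coefficient is the inequality readout.
import random

random.seed(1)

N, ROUNDS, STAKE = 30, 200, 0.1
wealth = [1.0] * N
strategy = [random.uniform(0.5, 1.5) for _ in range(N)]
edges = [(i, (i + 1) % N) for i in range(N)]  # ring network

def gini(w):
    """Gini coefficient: mean absolute pairwise difference, normalized."""
    n, total = len(w), sum(w)
    diff = sum(abs(a - b) for a in w for b in w)
    return diff / (2 * n * total)

for t in range(ROUNDS):
    i, j = random.choice(edges)          # pick a transacting pair
    stake = STAKE * min(wealth[i], wealth[j])
    # the agent with the higher strategy weight is more likely to win
    p_i = strategy[i] / (strategy[i] + strategy[j])
    winner, loser = (i, j) if random.random() < p_i else (j, i)
    wealth[winner] += stake
    wealth[loser] -= stake

print(f"Gini after {ROUNDS} rounds: {gini(wealth):.3f}")
```

Even with identical starting wealth, repeated stakes compound small strategy advantages, which is the kind of driver such frameworks are built to isolate.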

Sources

The Way We Prompt: Conceptual Blending, Neural Dynamics, and Prompt-Induced Transitions in LLMs

A Comparative Study of Large Language Models and Human Personality Traits

Concept Incongruence: An Exploration of Time and Death in Role Playing

Simulating Prosocial Behavior and Social Contagion in LLM Agents under Institutional Interventions

Modeling Inequality in Complex Networks of Strategic Agents using Iterative Game-Theoretic Transactions
