The field of large language models (LLMs) is evolving rapidly, with growing attention to their cognitive and social capabilities. Recent work has examined whether LLMs exhibit human-like personality traits, finding that such traits are dynamic and input-driven. Other studies have investigated concept incongruence in LLMs, showing that model behavior becomes inconsistent under incongruent conditions and motivating methods to improve that consistency. A related line of research simulates prosocial behavior in LLM agents, demonstrating that such agents can exhibit stable, context-sensitive prosocial behavior. Finally, frameworks for modeling inequality in networks of strategic agents have helped clarify the drivers of inequality in these systems. Noteworthy papers include:
- The Way We Prompt, which proposes prompt engineering as a scientific method for probing the deep structure of meaning.
- Simulating Prosocial Behavior and Social Contagion in LLM Agents under Institutional Interventions, which presents a simulation framework for examining prosocial behavior in LLM-based agents (a minimal sketch of such a loop follows below).
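To make the simulation idea concrete, the sketch below shows one plausible agent loop, not the paper's actual implementation: each agent is prompted with a persona, the active institutional intervention, and its neighbors' most recent choices, and the fraction of prosocial responses is tracked per round. The `query_llm` stub, persona strings, and intervention text are all hypothetical placeholders.

```python
import random

# Placeholder for a real LLM call (e.g., via an API client). Here it returns
# a random yes/no so the sketch runs without network access.
def query_llm(prompt: str) -> str:
    return random.choice(["yes", "no"])

def build_prompt(persona, intervention, neighbor_choices):
    """Compose the agent's decision prompt from its persona, the active
    institutional intervention, and observed neighbor behavior."""
    observed = ", ".join(neighbor_choices) if neighbor_choices else "none"
    return (
        f"You are {persona}. Current policy: {intervention}. "
        f"Your neighbors' last donation choices: {observed}. "
        "Do you donate to the public fund this round? Answer yes or no."
    )

def simulate(personas, neighbors, intervention, rounds=5):
    """Run a round-based prosocial-contagion simulation.

    personas: list of persona description strings, one per agent.
    neighbors: dict mapping agent index -> list of neighbor indices.
    intervention: text describing the institutional intervention in effect.
    Returns the fraction of prosocial ("yes") choices per round.
    """
    choices = ["no"] * len(personas)  # no history before round 1
    history = []
    for _ in range(rounds):
        new_choices = []
        for i, persona in enumerate(personas):
            seen = [choices[j] for j in neighbors.get(i, [])]
            answer = query_llm(build_prompt(persona, intervention, seen))
            new_choices.append("yes" if "yes" in answer.lower() else "no")
        choices = new_choices
        history.append(choices.count("yes") / len(choices))
    return history

if __name__ == "__main__":
    personas = ["a cautious retiree", "a generous student", "a pragmatic manager"]
    neighbors = {0: [1], 1: [0, 2], 2: [1]}  # a small line network
    print(simulate(personas, neighbors, "donations are publicly recognized"))
```

Swapping the intervention string (or removing it) and comparing the per-round prosocial fractions is the kind of comparison such a framework enables; the actual paper's agent design, network structure, and intervention encoding may differ substantially from this toy version.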