Advancements in Large Language Model Personalization and Social Simulation

The field of large language models is moving toward more personalized, human-like interaction, with a focus on developing models that can simulate social behaviors and exhibit consistent personality traits. Researchers are exploring new methods for conditioning language models on controllable personality traits, improving persona consistency in dialogue generation, and designing large language model agents to pilot social experiments. Notable papers in this area include:

Scaling Personality Control in LLMs with Big Five Scaler Prompts, which presents a prompt-based framework for conditioning large language models on controllable Big Five personality traits.

Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores, which proposes a framework that outperforms previous methods and yields improvements for both million- and billion-parameter models.

Exploring Large Language Model Agents for Piloting Social Experiments, which develops a framework grounded in well-established social-science theories and practices, built from three key elements: large language model-driven experimental agents, methods for implementing interventions or treatments, and tools for collecting behavioral, survey, and interview data.
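The score-before-you-speak idea described above reduces to a simple rerank-and-select loop: generate several candidate responses, score each against the persona, and emit the highest-scoring one. A minimal sketch of that loop, using a toy word-overlap score as a hypothetical stand-in for the paper's learned response-quality scorer (the function names and the scoring heuristic here are illustrative, not from the paper):

```python
def persona_overlap_score(response: str, persona: str) -> float:
    """Toy stand-in for a learned response-quality scorer:
    the fraction of persona words echoed in the response."""
    persona_words = set(persona.lower().split())
    response_words = set(response.lower().split())
    if not persona_words:
        return 0.0
    return len(persona_words & response_words) / len(persona_words)

def score_before_you_speak(candidates: list[str], persona: str) -> str:
    """Score every candidate against the persona, then speak the best one."""
    return max(candidates, key=lambda r: persona_overlap_score(r, persona))

persona = "I love hiking and my dog Rex"
candidates = [
    "The weather is nice today.",
    "Rex and I went hiking yesterday!",
    "I prefer staying indoors.",
]
print(score_before_you_speak(candidates, persona))
```

In practice the candidates would come from sampling an LLM several times, and the scorer would be a trained consistency model rather than word overlap; the selection step itself stays the same.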

Sources

Scaling Personality Control in LLMs with Big Five Scaler Prompts

Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores

Exploring Large Language Model Agents for Piloting Social Experiments

IROTE: Human-like Traits Elicitation of Large Language Model via In-Context Self-Reflective Optimization

Simulating Generative Social Agents via Theory-Informed Workflow Design

CPO: Addressing Reward Ambiguity in Role-playing Dialogue via Comparative Policy Optimization

PersonaEval: Are LLM Evaluators Human Enough to Judge Role-Play?
