Advances in Human-Like Interactions with Large Language Models

The field of large language models (LLMs) is moving toward more human-like interaction, with a growing focus on incorporating personality traits and social dynamics. Researchers are exploring ways to evaluate and control how LLMs express personality, enabling more nuanced and consistent human-machine interaction. This work spans new evaluation frameworks, methods for predicting personality traits from text, and techniques for inducing specific traits in LLMs. The goal is to create LLMs that can adapt to diverse human operators and stakeholders, leading to more effective and reliable interactions across applications. Noteworthy papers include:

  • Exploring Big Five Personality and AI Capability Effects in LLM-Simulated Negotiation Dialogues, which establishes a repeatable evaluation methodology for experimenting with AI agent reliability across diverse operator personalities and human-agent team dynamics.
  • SAC: A Framework for Measuring and Inducing Personality Traits in LLMs with Dynamic Intensity Control, which introduces a structured framework for evaluating and dynamically inducing trait intensity in LLMs, allowing for more expressive control over sixteen distinct traits.
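The dynamic intensity control described above can be illustrated with a minimal sketch. The function below builds a system prompt that asks a model to express a trait at a chosen intensity level; the function name, the 1-5 scale, and the level wording are illustrative assumptions, not the SAC framework's actual method or API.

```python
# Hypothetical sketch of prompt-based trait-intensity induction.
# The intensity scale and phrasing are assumptions for illustration.

def trait_induction_prompt(trait: str, intensity: int) -> str:
    """Build a system prompt expressing `trait` at `intensity` (1-5)."""
    if not 1 <= intensity <= 5:
        raise ValueError("intensity must be between 1 and 5")
    levels = {1: "very slightly", 2: "slightly", 3: "moderately",
              4: "strongly", 5: "very strongly"}
    return (f"You are a conversational assistant. In every reply, "
            f"express the personality trait '{trait}' {levels[intensity]} "
            f"(level {intensity} of 5), while remaining helpful and factual.")

prompt = trait_induction_prompt("extraversion", 4)
```

Varying the `intensity` argument at runtime, rather than fixing it in a static persona description, is what makes the control "dynamic" in the sense the SAC paper describes.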

Sources

Exploring Big Five Personality and AI Capability Effects in LLM-Simulated Negotiation Dialogues

Personality Prediction from Life Stories using Language Models

Spotting Out-of-Character Behavior: Atomic-Level Evaluation of Persona Fidelity in Open-Ended Generation

SAC: A Framework for Measuring and Inducing Personality Traits in LLMs with Dynamic Intensity Control
