Advancements in Large Language Models and Personalization

The field of large language models (LLMs) is evolving rapidly, with a focus on simulating human-like behavior, personalizing interactions, and adapting to diverse applications. Recent work introduces methods for controlling model behavior, such as action-aware persona modeling and activation steering for LLMs, and physics steering for causal control of concepts in a physics foundation model, enabling more realistic and controllable simulations. Complementary advances in online learning, sparse feature selection, and probabilistic hash embeddings improve the handling of complex data streams and categorical features. Noteworthy papers include Point of Order, which introduces a reproducible pipeline for transforming public Zoom recordings into speaker-attributed transcripts with metadata, enabling more realistic civic simulations, and MTA, which proposes a Merge-then-Adapt framework for personalized LLMs, achieving state-of-the-art performance across multiple tasks.
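To make the activation-steering idea concrete, the sketch below shows the core operation in its simplest form: adding a scaled "trait direction" vector to each token's hidden activations at one layer. This is a minimal illustration of the general technique, not the method of any paper listed above; the function name, the toy vectors, and the "formality" label are all hypothetical.

```python
def apply_activation_steering(hidden, steering_vector, alpha=1.0):
    """Add a scaled trait direction to each token's hidden state.

    hidden: list of per-token activation vectors from one layer.
    steering_vector: direction associated with a latent trait
        (hypothetical example: a "formality" direction).
    alpha: steering strength; 0 disables steering entirely.
    """
    return [
        [h + alpha * s for h, s in zip(token_vec, steering_vector)]
        for token_vec in hidden
    ]

# Toy demo: 2 tokens with 3-dimensional hidden states.
hidden = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]
trait = [1.0, 0.0, -1.0]  # hypothetical trait direction
steered = apply_activation_steering(hidden, trait, alpha=0.5)
```

In practice the direction is estimated from model activations (e.g. contrasting prompts that do and do not express the trait) and applied during the forward pass; the empirical studies above probe where this control succeeds and where it breaks down.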
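Hash embeddings, the building block behind the probabilistic variant cited below, can likewise be sketched in a few lines: a categorical token is passed through k hash functions, each selecting a row of a fixed-size shared table, and the rows are combined. This is a minimal deterministic sketch of the basic idea (without the probabilistic treatment); the table size, row-averaging choice, and token name are illustrative assumptions.

```python
import hashlib

def hash_embedding(token, table, num_hashes=2):
    """Map a categorical token to a vector via k hash functions.

    Each hash picks a row in a fixed-size shared table; the rows are
    averaged, so previously unseen categories still map to a
    well-defined vector without growing a vocabulary.
    """
    rows, dim = len(table), len(table[0])
    idxs = [
        int(hashlib.md5(f"{h}:{token}".encode()).hexdigest(), 16) % rows
        for h in range(num_hashes)
    ]
    return [sum(table[i][d] for i in idxs) / num_hashes for d in range(dim)]

# Toy table: 5 rows of 2-dimensional embeddings.
table = [[float(r), float(r * r)] for r in range(5)]
vec = hash_embedding("user_42", table)  # hypothetical category id
```

Because the table size is fixed and lookups are deterministic, this scheme suits online learning over streams where new categorical values keep arriving; the probabilistic version additionally models uncertainty over the embeddings.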

Sources

Point of Order: Action-Aware LLM Persona Modeling for Realistic Civic Simulation

How Far Can LLMs Emulate Human Behavior?: A Strategic Analysis via the Buy-and-Sell Negotiation Game

Steering Latent Traits, Not Learned Facts: An Empirical Study of Activation Control Limits

Online Sparse Feature Selection in Data Streams via Differential Evolution

Profile-LLM: Dynamic Profile Optimization for Realistic Personality Expression in LLMs

Online-PVLM: Advancing Personalized VLMs with Online Concept Learning

MTA: A Merge-then-Adapt Framework for Personalized Large Language Model

Interactive AI NPCs Powered by LLMs: Technical Report for the CPDC Challenge 2025

Physics Steering: Causal Control of Cross-Domain Concepts in a Physics Foundation Model

Probabilistic Hash Embeddings for Online Learning of Categorical Features
