Personalization and Robustness in Large Language Models

The field of large language models (LLMs) is moving toward increased personalization, adapting responses to individual user needs and preferences. This trend, however, raises concerns about fairness, morality, and the need for regulation. Recent studies have highlighted the importance of robustness in LLMs, particularly under personalization, where models must balance factual accuracy with alignment to the user. Noteworthy papers in this area include:

Personalised Pricing: The Demise of the Fixed Price?, which discusses the implications of online price discrimination and the need for regulation.

A Mega-Study of Digital Twins Reveals Strengths, Weaknesses and Opportunities for Further Improvement, which investigates how well digital twins capture individual responses.

Pathways of Thoughts: Multi-Directional Thinking for Long-form Personalized Question Answering, which proposes a novel method for personalized question answering.

Benchmarking and Improving LLM Robustness for Personalized Generation, which introduces a framework for evaluating robustness in LLMs and proposes a two-stage approach to improving it.
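To make the robustness idea concrete, the sketch below shows one generic way such an evaluation could be set up: check whether a model's factual answers stay stable when a user profile is prepended to the prompt. This is an illustrative assumption, not the actual framework from the cited benchmarking paper; the `toy_model` and its fact table are hypothetical stand-ins.

```python
# Hedged sketch: a generic measure of personalization robustness,
# NOT the specific framework from the cited paper. Intuition: a
# robust model's factual answers should not flip when a user
# profile is prepended to the prompt.

def robustness_score(model, questions, profiles):
    """Fraction of (question, profile) pairs whose answer matches
    the profile-free baseline answer."""
    total = 0
    stable = 0
    for q in questions:
        baseline = model(q)                     # answer with no personalization
        for p in profiles:
            total += 1
            if model(f"{p}\n{q}") == baseline:  # unchanged despite the profile
                stable += 1
    return stable / total if total else 1.0

# Hypothetical stand-in model: answers factual questions by lookup,
# ignoring any profile text (i.e., perfectly robust on these inputs).
FACTS = {"capital of France?": "Paris", "2 + 2?": "4"}

def toy_model(prompt):
    for q, a in FACTS.items():
        if prompt.endswith(q):
            return a
    return "unknown"

score = robustness_score(
    toy_model,
    list(FACTS),
    ["User prefers brevity.", "User is a chef."],
)
print(score)  # 1.0 for this perfectly robust toy model
```

A real evaluation would replace `toy_model` with calls to the LLM under test and use a softer match (e.g., semantic equivalence) instead of exact string equality, but the score definition stays the same.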

Sources

Personalised Pricing: The Demise of the Fixed Price?

A Mega-Study of Digital Twins Reveals Strengths, Weaknesses and Opportunities for Further Improvement

Pathways of Thoughts: Multi-Directional Thinking for Long-form Personalized Question Answering

Benchmarking ChatGPT and DeepSeek in April 2025: A Novel Dual Perspective Sentiment Analysis Using Lexicon-Based and Deep Learning Approaches

Benchmarking and Improving LLM Robustness for Personalized Generation

The Inadequacy of Offline LLM Evaluations: A Need to Account for Personalization in Model Behavior
