Personalized Text Generation and Information Retrieval

The field of personalized text generation and information retrieval is moving toward more effective, user-centric approaches. Researchers are exploring new evaluation metrics and frameworks, such as metric ensembles and reference-free protocols, that better assess both the quality and the degree of personalization of generated text. There is also growing interest in adaptive personalization methods that dynamically incorporate user profiles and conversational context into search queries, as well as in using natural language feedback and contrastive reward optimization to improve the personalization of question answering systems.

Noteworthy papers include: Evaluating Style-Personalized Text Generation, which provides conclusive evidence for adopting an ensemble of diverse evaluation metrics; PREF, which introduces a reference-free evaluation framework that jointly measures general output quality and user-specific alignment; PrLM, which proposes a reinforcement learning framework that trains LLMs to explicitly reason over retrieved user profiles via contrastive reward optimization; Adaptive Personalized Conversational Information Retrieval, which identifies the personalization level a query requires and integrates personalized queries with other query reformulations; and Learning from Natural Language Feedback for Personalized Question Answering, which replaces scalar rewards with natural language feedback.
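
As a rough illustration of the ensemble recommendation, the sketch below averages several cheap, deliberately diverse style signals (character n-gram overlap, average word length, punctuation usage) into a single score. The specific metrics and names such as ensemble_style_score are illustrative assumptions, not the metrics studied in the cited paper.

```python
from collections import Counter

def char_ngram_jaccard(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of character n-grams, a cheap proxy for surface style."""
    grams_a = {a[i:i + n] for i in range(max(len(a) - n + 1, 0))}
    grams_b = {b[i:i + n] for i in range(max(len(b) - n + 1, 0))}
    if not grams_a or not grams_b:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a | grams_b)

def word_length_similarity(a: str, b: str) -> float:
    """Similarity of average word length, a toy proxy for lexical style."""
    def avg_len(text: str) -> float:
        words = text.split()
        return sum(map(len, words)) / len(words) if words else 0.0
    return 1.0 / (1.0 + abs(avg_len(a) - avg_len(b)))

def punctuation_profile_similarity(a: str, b: str) -> float:
    """Cosine similarity of punctuation-usage counts."""
    marks = ".,;:!?-'\""
    ca = Counter(ch for ch in a if ch in marks)
    cb = Counter(ch for ch in b if ch in marks)
    dot = sum(ca[m] * cb[m] for m in marks)
    norm = (sum(v * v for v in ca.values()) ** 0.5) * (sum(v * v for v in cb.values()) ** 0.5)
    return dot / norm if norm else 0.0

def ensemble_style_score(candidate: str, user_reference: str) -> float:
    """Average several diverse style signals instead of trusting any single one."""
    metrics = [char_ngram_jaccard, word_length_similarity, punctuation_profile_similarity]
    return sum(m(candidate, user_reference) for m in metrics) / len(metrics)

if __name__ == "__main__":
    reference = "Honestly, I'd rather keep it short -- quick notes, no fluff!"
    candidate = "Keeping it short, honestly: quick notes and no fluff."
    print(f"ensemble style score: {ensemble_style_score(candidate, reference):.3f}")
```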
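
The contrastive reward idea can likewise be sketched, at a high level, as rewarding the margin by which a profile-grounded answer beats a profile-free one under some answer-quality judge. The judge used here (a toy token-overlap F1) and the overall framing are assumptions for illustration only, not PrLM's actual reward definition.

```python
from typing import Callable

def contrastive_reward(
    score: Callable[[str, str], float],  # hypothetical judge: (answer, target) -> quality score
    personalized_answer: str,
    generic_answer: str,
    target: str,
) -> float:
    """Reward the margin by which the profile-grounded answer beats a generic one.

    A positive reward means conditioning on the user profile actually helped;
    a reward near zero means the profile added nothing, so the policy gets no credit.
    """
    return score(personalized_answer, target) - score(generic_answer, target)

if __name__ == "__main__":
    # Toy judge: token-overlap F1 against a gold answer (stand-in for a learned scorer).
    def overlap_f1(answer: str, target: str) -> float:
        a, t = set(answer.lower().split()), set(target.lower().split())
        if not a or not t:
            return 0.0
        p, r = len(a & t) / len(a), len(a & t) / len(t)
        return 2 * p * r / (p + r) if p + r else 0.0

    gold = "book a window seat on the morning train, aisle seats make me queasy"
    with_profile = "I booked you a window seat on the morning train since aisle seats make you queasy."
    without_profile = "I booked you a seat on the morning train."
    print(f"contrastive reward: {contrastive_reward(overlap_f1, with_profile, without_profile, gold):.3f}")
```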

Sources

Evaluating Style-Personalized Text Generation: Challenges and Directions

The ReQAP System for Question Answering over Personal Information

PrLM: Learning Explicit Reasoning for Personalized RAG via Contrastive Reward Optimization

Adaptive Personalized Conversational Information Retrieval

PREF: Reference-Free Evaluation of Personalised Text Generation in LLMs

Learning from Natural Language Feedback for Personalized Question Answering
