The field of personalized text generation and information retrieval is moving toward more effective, user-centric approaches. Researchers are exploring evaluation metrics and frameworks that better assess both the quality and the degree of personalization of generated text, such as ensembles of diverse metrics and reference-free evaluation frameworks. There is also growing interest in adaptive personalization methods that dynamically incorporate user profiles and conversational context into search queries, and in using natural language feedback and contrastive reward optimization to improve personalized question answering systems.

Noteworthy papers include (each idea is sketched below):

- Evaluating Style-Personalized Text Generation, which provides conclusive evidence for adopting an ensemble of diverse evaluation metrics.
- PREF, which introduces a reference-free evaluation framework that jointly measures general output quality and user-specific alignment.
- PrLM, which proposes a reinforcement learning framework that trains LLMs to explicitly reason over retrieved user profiles.
- Adaptive Personalized Conversational Information Retrieval, which identifies the required personalization level for a query and integrates personalized queries with other query reformulations.
- Learning from Natural Language Feedback for Personalized Question Answering, which introduces a framework that replaces scalar rewards with natural language feedback.
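To make the ensemble idea concrete, here is a minimal sketch of combining diverse style-personalization metrics into one score. The individual metrics (unigram overlap with a sample of the user's past writing, and a length-similarity heuristic) and the weights are illustrative stand-ins, not the metrics studied in Evaluating Style-Personalized Text Generation:

```python
# Sketch: ensemble of diverse evaluation metrics for style personalization.
# The metrics and weights below are hypothetical placeholders.
from collections import Counter


def lexical_overlap(candidate: str, reference: str) -> float:
    """Unigram F1 between the candidate and a sample of the user's writing."""
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    common = sum((cand & ref).values())  # shared token mass
    if common == 0:
        return 0.0
    precision = common / sum(cand.values())
    recall = common / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def length_similarity(candidate: str, reference: str) -> float:
    """Crude stylistic cue: how closely output length matches the user's."""
    lc, lr = len(candidate.split()), len(reference.split())
    return min(lc, lr) / max(lc, lr)


def ensemble_score(candidate: str, user_sample: str, weights=(0.7, 0.3)) -> float:
    """Weighted average over diverse metrics: any single metric is easy to
    game, so an ensemble gives a more robust personalization signal."""
    metrics = (lexical_overlap, length_similarity)
    return sum(w * m(candidate, user_sample) for w, m in zip(weights, metrics))


print(ensemble_score("short informal reply with emojis",
                     "short informal reply, lots of emojis"))
```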
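A reference-free evaluator in the spirit of PREF can be pictured as a judge that scores a response on two axes, general quality and user alignment, with no gold reference. The judge prompt, rubric, and aggregation below are assumptions, not PREF's actual design; `llm` stands in for any text-completion callable:

```python
# Sketch: reference-free joint scoring of quality and user alignment.
# Prompt format and weighting are hypothetical, not PREF's implementation.

def build_judge_prompt(user_profile: str, question: str, response: str) -> str:
    return (
        "Rate the response on two axes from 1 to 5.\n"
        f"User profile: {user_profile}\n"
        f"Question: {question}\n"
        f"Response: {response}\n"
        "Reply exactly as: quality=<n> alignment=<n>"
    )


def pref_style_score(llm, user_profile, question, response, quality_weight=0.5):
    """Combine the two judged axes into one reference-free score."""
    reply = llm(build_judge_prompt(user_profile, question, response))
    scores = dict(pair.split("=") for pair in reply.split())
    quality, alignment = int(scores["quality"]), int(scores["alignment"])
    return quality_weight * quality + (1 - quality_weight) * alignment


# Usage with a stubbed judge model:
fake_llm = lambda prompt: "quality=4 alignment=5"
print(pref_style_score(fake_llm, "prefers terse answers", "What is RAM?", "Memory."))
```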
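The retrieve-then-reason setup that PrLM-style training builds on can be sketched as: fetch the user-profile snippets most relevant to the question, then prompt the model to reason over them explicitly before answering. The bag-of-words retriever here is a placeholder and the reinforcement learning objective itself is omitted; neither reflects PrLM's implementation:

```python
# Sketch: retrieve profile snippets, then prompt for explicit reasoning.
# Retriever and prompt wording are hypothetical placeholders.
from collections import Counter


def retrieve(question: str, profile_snippets: list[str], k: int = 2) -> list[str]:
    """Rank profile snippets by unigram overlap with the question."""
    q = Counter(question.lower().split())
    scored = sorted(profile_snippets,
                    key=lambda s: sum((q & Counter(s.lower().split())).values()),
                    reverse=True)
    return scored[:k]


def reasoning_prompt(question: str, profile_snippets: list[str]) -> str:
    """Ask the model to reason explicitly over the retrieved profile."""
    evidence = "\n".join(f"- {s}" for s in retrieve(question, profile_snippets))
    return (f"User profile evidence:\n{evidence}\n"
            f"Question: {question}\n"
            "First explain which evidence matters and why, then answer.")


snippets = ["vegetarian", "lives in Berlin", "allergic to peanuts"]
print(reasoning_prompt("suggest a dinner recipe", snippets))
```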
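Adaptive personalization for conversational search can be illustrated as estimating how much personalization a query needs, then weighting a profile-aware reformulation against a generic one accordingly. The cue-word heuristic and the two reformulators below are illustrative assumptions, not the method of Adaptive Personalized Conversational Information Retrieval:

```python
# Sketch: estimate a query's personalization level and blend reformulations.
# The cue list and reformulation templates are hypothetical.
PERSONAL_CUES = {"my", "me", "i", "recommend", "suggest"}


def personalization_level(query: str) -> float:
    """Fraction of tokens signaling a user-dependent information need."""
    tokens = query.lower().split()
    return sum(t in PERSONAL_CUES for t in tokens) / max(len(tokens), 1)


def reformulate(query: str, profile: str, context: list[str]) -> list[tuple[str, float]]:
    """Return candidate queries weighted by the estimated level."""
    level = personalization_level(query)
    generic = f"{' '.join(context[-1:])} {query}".strip()  # context-only rewrite
    personal = f"{generic} {profile}"                      # profile-injected rewrite
    return [(generic, 1 - level), (personal, level)]


history = ["we were talking about laptops"]
for q, w in reformulate("recommend one for me", "user likes lightweight linux machines", history):
    print(f"{w:.2f}  {q}")
```

The key design point is that personalization is applied per query rather than uniformly: an ambiguous, preference-laden query gets more weight on the profile-injected rewrite, while a factual one stays mostly generic.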
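Finally, replacing a scalar reward with natural language feedback can be pictured as a loop in which a critic returns a textual critique that is folded back into the next generation prompt. `generate` and `critique` are placeholders for the policy model and feedback source, and the actual training objective (e.g., contrastive reward optimization) is not reproduced here:

```python
# Sketch: iterative refinement driven by textual feedback instead of a
# scalar reward. Model interfaces are hypothetical stand-ins.

def refine_with_feedback(generate, critique, question, user_profile, rounds=2):
    """Iteratively revise an answer using natural language feedback."""
    feedback = ""
    answer = generate(question, user_profile, feedback)
    for _ in range(rounds):
        feedback = critique(question, user_profile, answer)
        if feedback.strip().lower() == "ok":  # critic is satisfied
            break
        answer = generate(question, user_profile, feedback)
    return answer


# Usage with stubbed models:
gen = lambda q, p, f: f"Answer({q!r}, revised for: {f or 'first draft'})"
crit = lambda q, p, a: "be more concise" if "first draft" in a else "ok"
print(refine_with_feedback(gen, crit, "best editor?", "prefers terminal tools"))
```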