The field is moving towards more nuanced, multi-perspective approaches to natural language processing, with a focus on capturing human disagreement and subjectivity rather than collapsing annotations into a single gold label. Large language models are being fine-tuned to incorporate user feedback and behavioral data, enabling more accurate evaluations of document usefulness and user satisfaction. There is also growing interest in optimizing the presentation of search and recommendation results to improve user experience and engagement. Noteworthy papers include:
- Bridging the Gap: In-Context Learning for Modeling Human Disagreement, which demonstrates the viability of multi-perspective generation in zero-shot settings.
- Leveraging LLMs to Evaluate Usefulness of Document, which introduces a new user-centric evaluation framework that integrates users' search context and behavioral data into LLMs.
- Enhanced Whole Page Optimization via Mixed-Grained Reward Mechanism-Adapted Language Models, which proposes a reward-based fine-tuning approach to optimize the presentation of search and recommendation results.
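To make the multi-perspective idea concrete, the sketch below shows one common way to preserve annotator disagreement: prompt a model once per simulated annotator perspective and aggregate the answers into a soft label distribution instead of a majority vote. This is an illustrative assumption, not the method of any paper above; the perspective descriptions, the `query_llm` stub (which returns canned answers in place of a real zero-shot LLM call), and the label set are all hypothetical.

```python
from collections import Counter

# Hypothetical annotator perspectives; in a real zero-shot setup each would
# be folded into an LLM prompt such as:
#   f"As {perspective}, label the text 'offensive' or 'not offensive': {text}"
PERSPECTIVES = [
    "a strict annotator who flags only explicit offense",
    "a lenient annotator sensitive to implicit offense",
    "an annotator focused on the speaker's intent",
]

def query_llm(text: str, perspective: str) -> str:
    """Stub standing in for an LLM call; returns canned labels for illustration."""
    canned = {
        PERSPECTIVES[0]: "not offensive",
        PERSPECTIVES[1]: "offensive",
        PERSPECTIVES[2]: "offensive",
    }
    return canned[perspective]

def soft_label(text: str) -> dict[str, float]:
    """Aggregate per-perspective labels into a soft distribution,
    preserving disagreement instead of forcing a single gold label."""
    votes = Counter(query_llm(text, p) for p in PERSPECTIVES)
    total = sum(votes.values())
    return {label: count / total for label, count in votes.items()}

print(soft_label("You people never listen."))
```

The soft distribution (here, 2/3 "offensive" vs. 1/3 "not offensive") keeps the disagreement signal that a hard majority vote would discard, which is the evaluation target in multi-perspective settings.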