The field of natural language processing is moving towards more personalized and user-centric approaches. Researchers are exploring ways to decouple content generation from personalization, allowing for more precise control and higher quality outputs. This shift is driven by the need to improve user engagement and motivation, particularly in applications such as language learning and creative writing.
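The decoupling idea above can be sketched as a two-stage pipeline: a persona-agnostic generation step followed by a separate personalization rewrite. This is a minimal toy illustration, not the method of any cited paper; the `Persona` fields and rewrite rules are invented for the example, and the stubs stand in for LLM calls.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    reading_level: str               # assumed values: "beginner" or "advanced"
    interests: list[str] = field(default_factory=list)

def generate_content(topic: str) -> str:
    """Stage 1: persona-agnostic content generation (stub for an LLM call)."""
    return f"An overview of {topic}, covering its core concepts and applications."

def personalize(text: str, persona: Persona) -> str:
    """Stage 2: post-hoc rewrite conditioned on the user profile."""
    if persona.reading_level == "beginner":
        # Simplify wording for beginner readers (illustrative rule only).
        text = text.replace("core concepts and applications", "basic ideas")
    if persona.interests:
        text += f" Examples are drawn from {persona.interests[0]}."
    return text

base = generate_content("reading comprehension")
out = personalize(base, Persona("Ana", "beginner", ["music"]))
print(out)
```

Keeping the two stages separate means the generic content can be cached and reused across users, with only the cheaper personalization step run per user.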
Noteworthy papers in this area include:
- Reflective Personalization Optimization, which proposes a framework for personalizing black-box large language models.
- One-Topic-Doesn't-Fit-All, which develops a structured content transcreation pipeline for generating personalized English reading comprehension tests.
- LiteraryTaste, which introduces a dataset of reading preferences to support the development of personalized creative writing models.
- BIG5-TPoT, which presents a strategy for predicting personality traits from generated texts using targeted preselection of texts.