Personalization in Natural Language Processing

The field of natural language processing is moving towards more personalized and user-centric approaches. Researchers are exploring ways to decouple content generation from personalization, allowing for more precise control and higher quality outputs. This shift is driven by the need to improve user engagement and motivation, particularly in applications such as language learning and creative writing.
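The decoupling described above can be sketched as a two-stage pipeline: a base model produces generic content, and a separate post-hoc step rewrites it for the user. The function names and the dictionary-based user profile below are illustrative assumptions, not the API of any of the cited papers.

```python
def generate_base_content(prompt: str) -> str:
    """Stage 1: produce generic content (stand-in for a black-box LLM call)."""
    return f"Generic answer to: {prompt}"

def personalize(base_output: str, user_profile: dict) -> str:
    """Stage 2: post-hoc rewrite of the base output using the user profile.
    A real system would prompt a rewriting model; here we only annotate."""
    tone = user_profile.get("tone", "neutral")
    return f"[{tone} tone] {base_output}"

def pipeline(prompt: str, user_profile: dict) -> str:
    # Decoupling: content quality is handled in stage 1 and personalization
    # in stage 2, so each stage can be evaluated and tuned independently.
    return personalize(generate_base_content(prompt), user_profile)
```

Because the stages are separate, the same base output can be re-personalized for different users without regenerating the content.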

Noteworthy papers in this area include Reflective Personalization Optimization, which proposes a framework for personalizing the outputs of black-box large language models through post-hoc rewriting; One-Topic-Doesn't-Fit-All, which develops a structured content transcreation pipeline for generating personalized English reading comprehension tests; LiteraryTaste, which introduces a dataset of reading preferences to support the development of personalized creative writing models; and BIG5-TPoT, which presents a strategy for predicting Big Five personality traits through targeted preselection of a user's texts.

Sources

Reflective Personalization Optimization: A Post-hoc Rewriting Framework for Black-Box Large Language Models

One-Topic-Doesn't-Fit-All: Transcreating Reading Comprehension Test for Personalized Learning

LiteraryTaste: A Preference Dataset for Creative Writing Personalization

BIG5-TPoT: Predicting BIG Five Personality Traits, Facets, and Items Through Targeted Preselection of Texts
