Personalization and Human-AI Interaction in Language Models

The field of natural language processing is moving toward more personalized and interactive language models. Researchers are exploring how to tailor language models to individual users, accounting for their preferences, behaviors, and cognitive styles. This work includes methods for injecting user-specific information into pre-trained language models, tools designed around transparency, equity, and usability, and frameworks for systematically studying syntactic variation across languages and modalities. Noteworthy papers in this area include Embedding-to-Prefix, which proposes a parameter-efficient personalization method that maps user embeddings into soft prefixes for a frozen language model (sketched below), and What Needs Attention?, which identifies the factors that most strongly drive developers' trust in and adoption of generative AI tools. Other notable works, such as Modeling and Optimizing User Preferences in AI Copilots and Machine-Facing English, respectively provide a comprehensive survey and taxonomy of preference-optimization strategies and define a hybrid register shaped by human-AI discourse.
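
To make the prefix-injection idea concrete, here is a minimal sketch of an embedding-to-prefix style adapter, assuming a frozen decoder-only language model and a precomputed user embedding (e.g., from a recommender system). The module name UserPrefixAdapter, the dimensions, and the overall wiring are illustrative assumptions for this sketch, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class UserPrefixAdapter(nn.Module):
    """Projects a fixed user embedding into a short sequence of soft
    prefix embeddings prepended to the model's input embeddings.
    Illustrative sketch; hyperparameters and wiring are assumptions."""

    def __init__(self, user_dim: int, model_dim: int, prefix_len: int = 8):
        super().__init__()
        self.prefix_len = prefix_len
        self.model_dim = model_dim
        # Small trainable projection; the base LM itself stays frozen,
        # which is what makes the approach parameter-efficient.
        self.proj = nn.Sequential(
            nn.Linear(user_dim, model_dim * prefix_len),
            nn.Tanh(),
        )

    def forward(self, user_emb: torch.Tensor) -> torch.Tensor:
        # user_emb: (batch, user_dim) -> (batch, prefix_len, model_dim)
        prefix = self.proj(user_emb)
        return prefix.view(-1, self.prefix_len, self.model_dim)


def personalize_inputs(token_embs: torch.Tensor,
                       user_emb: torch.Tensor,
                       adapter: UserPrefixAdapter) -> torch.Tensor:
    """Prepend user-conditioned soft prefix embeddings to token embeddings."""
    prefix = adapter(user_emb)                     # (B, P, D)
    return torch.cat([prefix, token_embs], dim=1)  # (B, P + T, D)


# Usage sketch: only the adapter's parameters would be trained.
adapter = UserPrefixAdapter(user_dim=128, model_dim=768, prefix_len=8)
user_emb = torch.randn(2, 128)         # hypothetical user embeddings
token_embs = torch.randn(2, 16, 768)   # frozen LM's input embeddings
inputs = personalize_inputs(token_embs, user_emb, adapter)
print(inputs.shape)  # torch.Size([2, 24, 768])
```

Under this framing, only the projection layer is updated during training, so per-user personalization adds a small number of parameters relative to fine-tuning the full model.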

Sources

Embedding-to-Prefix: Parameter-Efficient Personalization for Pre-Trained Large Language Models

What Needs Attention? Prioritizing Drivers of Developers' Trust and Adoption of Generative AI

Modeling and Optimizing User Preferences in AI Copilots: A Comprehensive Survey and Taxonomy

Counting trees: A treebank-driven exploration of syntactic variation in speech and writing across languages

Machine-Facing English: Defining a Hybrid Register Shaped by Human-AI Discourse
