Personalization in Human-Centric AI

The field of human-centric AI is shifting toward personalized models that capture individual preferences and decision-making processes. Recent work adapts reward models to specific individuals or groups, enabling more accurate predictions and closer alignment with human values. This is achieved through methods such as latent embedding adaptation, low-rank adaptation, and representation learning, which allow personalized reward models to be learned efficiently even from limited local data, often outperforming existing solutions. Notable papers include:

  • Capturing Individual Human Preferences with Reward Features, which proposes a method to specialise a reward model to a person or group of people.
  • A Shared Low-Rank Adaptation Approach to Personalized RLHF, which introduces Low-Rank Adaptation into the personalized RLHF framework to enable efficient learning of personalized reward models.
  • Learning to Represent Individual Differences for Choice Decision Making, which demonstrates the use of representation learning to measure individual differences from behavioral experiment data.
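The common thread across these papers is a shared model combined with a small per-user component. A minimal sketch of that idea, assuming a PyTorch setup (the architecture, dimensions, and training loop below are illustrative, not the implementation from any of the papers): a shared backbone produces reward features, each user owns a lightweight weight vector over those features, and the model is trained on pairwise preferences with a Bradley-Terry style loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)


class PersonalizedRewardModel(nn.Module):
    """Illustrative personalized reward model: a shared backbone computes
    reward features; each user has a small personal weight vector that
    combines those features into a scalar reward."""

    def __init__(self, input_dim: int, feature_dim: int, num_users: int):
        super().__init__()
        # Shared across all users: maps inputs to reward features.
        self.backbone = nn.Sequential(
            nn.Linear(input_dim, feature_dim),
            nn.ReLU(),
        )
        # Per-user parameters: one weight vector over the shared features.
        self.user_weights = nn.Embedding(num_users, feature_dim)

    def forward(self, x: torch.Tensor, user_id: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)          # (batch, feature_dim)
        w = self.user_weights(user_id)       # (batch, feature_dim)
        return (features * w).sum(dim=-1)    # (batch,) scalar rewards


model = PersonalizedRewardModel(input_dim=16, feature_dim=8, num_users=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Toy preference data: user 0 prefers x_chosen over x_rejected.
x_chosen = torch.randn(2, 16)
x_rejected = torch.randn(2, 16)
user = torch.tensor([0, 0])

for _ in range(100):
    margin = model(x_chosen, user) - model(x_rejected, user)
    # Bradley-Terry preference loss: maximize P(chosen > rejected).
    loss = -F.logsigmoid(margin).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

final_margin = (model(x_chosen, user) - model(x_rejected, user)).mean().item()
```

In a personalized-RLHF setting, the backbone would be learned jointly from all users' data while only the small per-user component is fit locally, which is why these methods remain data-efficient with limited individual datasets.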

Sources

Capturing Individual Human Preferences with Reward Features

Latent Embedding Adaptation for Human Preference Alignment in Diffusion Planners

A Shared Low-Rank Adaptation Approach to Personalized RLHF

Learning to Represent Individual Differences for Choice Decision Making
