The field of preference learning and behavior modeling is moving toward more dynamic and adaptable approaches. Researchers are shifting away from traditional methods that rely on static, manually curated behavior vocabularies and are instead developing pretraining strategies that construct supervision embeddings automatically. These advances have shown promising results in improving model performance and reducing bias in online preference learning. In addition, configurable preference tuning and generative reward models enable finer-grained control and adaptability in modeling human preferences. Notable papers include Bootstrapping Your Behavior, which proposes a novel pretraining strategy that achieves an average improvement of 3.9% in AUC and a 98.9% gain in training throughput, and Configurable Preference Tuning with Rubric-Guided Synthetic Data, which introduces a framework for endowing language models with the ability to dynamically adjust their behavior based on explicit, human-interpretable directives, an idea sketched below.
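
To make the idea of rubric-conditioned preference tuning more concrete, here is a minimal sketch that pairs a human-interpretable rubric directive with a standard DPO-style preference objective. This is an illustrative assumption, not the actual method or API of Configurable Preference Tuning; the names `build_rubric_prompt` and `dpo_loss` are hypothetical, and the real framework may use a different data format and training objective.

```python
# Hypothetical sketch: conditioning preference tuning on an explicit rubric.
# The rubric is prepended to the prompt so the model can learn to adjust its
# behavior based on a human-interpretable directive; the loss below is the
# standard DPO objective, used here only as a stand-in preference objective.

import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style loss over rubric-conditioned (chosen, rejected) pairs."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()


def build_rubric_prompt(rubric: str, user_prompt: str) -> str:
    """Prepend the rubric directive so preferences are judged relative to it."""
    return f"[RUBRIC] {rubric}\n[USER] {user_prompt}"


# Toy usage: in practice these log-probabilities come from scoring the chosen
# and rejected completions under the policy and a frozen reference model.
prompt = build_rubric_prompt(
    rubric="Prefer concise, formal answers.",
    user_prompt="Explain gradient descent.",
)
policy_chosen = torch.tensor([-12.3])
policy_rejected = torch.tensor([-15.1])
ref_chosen = torch.tensor([-13.0])
ref_rejected = torch.tensor([-14.2])
loss = dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected)
print(prompt)
print(f"DPO loss on this pair: {loss.item():.4f}")
```

Because the rubric is part of the conditioning context rather than baked into a fixed reward model, swapping in a different directive at inference time changes which behaviors the tuned model prefers, which is the sense in which such tuning is "configurable."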