Advancing Large Language Model Creativity

Research in natural language generation is increasingly focused on enhancing the creative capabilities of Large Language Models (LLMs): generating content that is more novel, diverse, and surprising while maintaining high output quality. One promising direction injects signals from multiple creativity dimensions into the preference optimization objective, giving a more generalizable way to align models toward creativity. Another is multilingual prompting, which increases diversity by activating a broader range of the cultural knowledge embedded in a model's training data. A third line of work addresses length bias in diversity metrics and reward models, so that gains in response diversity do not come from simply shortening or lengthening responses at the expense of quality.

Noteworthy papers: Creative Preference Optimization proposes an alignment method built on the multi-dimensional creativity signal described above. Multilingual Prompting for Improving LLM Generation Diversity consistently outperforms existing diversity-enhancing techniques. Diverse, not Short introduces a length-controlled self-learning framework that improves response diversity while maintaining length parity. Illustrative sketches of all three ideas follow.
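To make the first idea concrete, here is a minimal sketch of a preference loss with a creativity-derived margin. It assumes a DPO-style objective and illustrative creativity dimensions (novelty, surprise, diversity, quality) with uniform weights; the actual formulation in Creative Preference Optimization may differ.

```python
import torch
import torch.nn.functional as F

# Hypothetical creativity dimensions and weights, for illustration only;
# the paper's actual dimensions and weighting scheme may differ.
CREATIVITY_DIMS = {"novelty": 1.0, "surprise": 1.0, "diversity": 1.0, "quality": 1.0}

def creativity_margin(chosen_scores, rejected_scores, weights=CREATIVITY_DIMS):
    """Collapse per-dimension creativity scores into one preference margin."""
    return sum(w * (chosen_scores[d] - rejected_scores[d]) for d, w in weights.items())

def creative_preference_loss(logp_chosen, logp_rejected,
                             ref_logp_chosen, ref_logp_rejected,
                             margin, beta=0.1):
    """DPO-style loss with a creativity margin (a sketch, not the paper's exact loss).

    logp_* are summed token log-probabilities of each response under the
    policy and a frozen reference model.
    """
    policy_logratio = logp_chosen - logp_rejected
    ref_logratio = ref_logp_chosen - ref_logp_rejected
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio) - margin).mean()

# Toy usage with dummy log-probabilities and per-dimension scores.
chosen = {"novelty": 0.8, "surprise": 0.7, "diversity": 0.9, "quality": 0.6}
rejected = {"novelty": 0.3, "surprise": 0.2, "diversity": 0.4, "quality": 0.7}
margin = creativity_margin(chosen, rejected)
loss = creative_preference_loss(
    torch.tensor(-12.0), torch.tensor(-13.5),
    torch.tensor(-12.2), torch.tensor(-12.8), margin)
print(f"margin={margin:.2f}  loss={loss.item():.4f}")
```

Because the margin is subtracted inside the log-sigmoid, the policy must prefer the more-creative response by a gap proportional to how much more creative it was scored across the dimensions.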
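For the second idea, here is a sketch of multilingual prompting. The `generate(prompt)` callable is a stand-in for any LLM sampling call, and the language list and prompt template are assumptions; the paper presumably prompts in each language directly rather than via an English instruction as done here.

```python
from typing import Callable, List

# Illustrative language set; the paper's selection may differ.
LANGUAGES = ["English", "Spanish", "Hindi", "Mandarin", "Swahili", "Arabic"]

def multilingual_generate(task: str, generate: Callable[[str], str],
                          languages: List[str] = LANGUAGES) -> List[str]:
    """Sample one completion per language and pool the results.

    Prompting in (or via) different languages can surface different slices of
    culturally specific training data, which is the mechanism credited for
    the diversity gain.
    """
    pooled = []
    for lang in languages:
        prompt = (f"Answer the task below as if it were asked in {lang}, "
                  f"drawing on knowledge associated with that language. "
                  f"Write the final answer in English.\n\nTask: {task}")
        pooled.append(generate(prompt))
    return pooled

# Toy usage with a stub generator in place of a real model call.
stub = lambda p: f"[completion for: {p.splitlines()[0][:40]}...]"
for out in multilingual_generate("Name a traditional breakfast dish.", stub)[:3]:
    print(out)
```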
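Finally, a sketch of length-controlled filtering for self-generated preference pairs, in the spirit of the length-parity constraint in Diverse, not Short. The word-count comparison and the 20% relative tolerance are assumptions, not the paper's exact procedure.

```python
from typing import List, Tuple

def length_parity_filter(pairs: List[Tuple[str, str]],
                         tolerance: float = 0.2) -> List[Tuple[str, str]]:
    """Keep preference pairs whose two responses have comparable length.

    If the 'more diverse' response in a pair is systematically shorter (or
    longer), a preference optimizer can learn the length shortcut instead of
    diversity; enforcing rough length parity removes that confound.
    The 20% relative tolerance here is an assumed value.
    """
    kept = []
    for chosen, rejected in pairs:
        lc, lr = len(chosen.split()), len(rejected.split())
        if abs(lc - lr) <= tolerance * max(lc, lr, 1):
            kept.append((chosen, rejected))
    return kept

# Toy usage: the second pair is dropped because its lengths diverge too much.
pairs = [("a b c d e f", "a b c d e"),
         ("a b", "a b c d e f g h i j")]
print(len(length_parity_filter(pairs)))  # -> 1
```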

Sources

Creative Preference Optimization

Multilingual Prompting for Improving LLM Generation Diversity

Diverse, not Short: A Length-Controlled Self-Learning Framework for Improving Response Diversity of Language Models

Exploring the Relationship Between Diversity and Quality in Ad Text Generation
