Research on Large Language Models (LLMs) is increasingly focused on aligning these models with human values and personalities. To simulate individualized human value systems, researchers are exploring approaches such as generating personal backstories and adopting occupational personas. Incorporating persona information into LLMs has been shown to improve the consistency and diversity of their outputs, but it also raises concerns about reinforcing biases and stereotypes. Noteworthy papers in this area include ValueSim, which presents a framework for simulating individual values through generated personal backstories, and IP-Dialog, which proposes an automatic synthetic-data-generation approach for evaluating implicit personalization in dialogue systems. Overall, persona-driven LLMs have the potential to enable more effective and personalized human-computer interaction, but their development requires careful attention to ethical implications and potential risks.
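As a minimal illustration of how persona information might be incorporated into an LLM, the sketch below assembles a backstory-conditioned system prompt. The `Persona` fields and the `build_persona_prompt` helper are hypothetical and not drawn from ValueSim, IP-Dialog, or any specific paper; this is only one common prompting pattern, under the assumption of a chat-style API that accepts a system message.

```python
# Hypothetical sketch: conditioning an LLM on a persona via a system prompt.
# The Persona fields and build_persona_prompt helper are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class Persona:
    name: str
    occupation: str
    backstory: str
    core_values: List[str]


def build_persona_prompt(persona: Persona) -> str:
    """Assemble a system prompt asking the model to answer in character."""
    values = ", ".join(persona.core_values)
    return (
        f"You are {persona.name}, a {persona.occupation}. "
        f"Backstory: {persona.backstory} "
        f"When answering, stay consistent with these values: {values}."
    )


persona = Persona(
    name="Dana",
    occupation="rural school teacher",
    backstory="Grew up in a small farming town and values community service.",
    core_values=["benevolence", "tradition", "security"],
)

# This string would be sent as the system message to a chat-style LLM API.
system_prompt = build_persona_prompt(persona)
print(system_prompt)
```

In practice, frameworks in this line of work go further than static prompts, e.g. by generating the backstory itself with an LLM and iteratively checking the model's answers against the persona's stated values.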