Advancements in Large Language Model Research

The field of Large Language Model (LLM) research is moving toward a more nuanced understanding of the interactions between models, users, and tasks. Recent studies highlight the importance of personality traits, demographic attributes, and cultural context in developing and evaluating LLMs. Synthetic personae and multimodal foundation models show promise for improving the accuracy and fairness of demographic inference and biomedical summarization, though challenges remain in mitigating bias, ensuring representativeness, and addressing safety concerns on AI character platforms. Noteworthy papers in this area include:

- Mitigating the Threshold Priming Effect in Large Language Model-Based Relevance Judgments via Personality Infusing, which proposes personality prompting to reduce threshold priming, the tendency of a judge's relevance ratings to be anchored by the documents seen earlier.
- Benchmarking and Understanding Safety Risks in AI Character Platforms, which conducts a large-scale safety study of AI character platforms and shows that less safe characters can be identified predictively.
- Demographic Inference from Social Media Data with Multimodal Foundation Models, which uses a state-of-the-art multimodal foundation model to infer age, gender, and race from social media profiles with high accuracy.
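To make the personality-prompting idea concrete, a minimal sketch is shown below. All persona names, trait wordings, and the rating scale are illustrative assumptions, not the paper's actual prompts: the core move is simply prepending a personality description to the judging instructions so that the model's relevance ratings are less anchored by previously seen documents.

```python
# Hypothetical sketch of "personality prompting" for LLM relevance judgments.
# The trait descriptions and prompt wording are assumptions for illustration,
# not taken from the cited paper.

# Big Five-style trait descriptions prepended to the judging instructions.
PERSONA_TRAITS = {
    "high_openness": (
        "You are curious and open-minded, and you judge each document "
        "strictly on its own merits."
    ),
    "high_conscientiousness": (
        "You are careful and systematic, and you apply the relevance "
        "criteria consistently to every document, regardless of what "
        "you judged before."
    ),
}

def build_judgment_prompt(query: str, document: str, trait: str) -> str:
    """Build a relevance-judgment prompt with a personality preamble,
    intended to reduce anchoring on earlier judgments (threshold priming)."""
    persona = PERSONA_TRAITS[trait]
    return (
        f"{persona}\n\n"
        f"Query: {query}\n"
        f"Document: {document}\n"
        "On a scale of 0 (not relevant) to 3 (highly relevant), rate the "
        "relevance of the document to the query. Answer with a single digit."
    )

# Example usage: the resulting string would be sent to an LLM judge.
prompt = build_judgment_prompt(
    query="effects of caffeine on sleep",
    document="A randomized trial of caffeine intake and sleep latency.",
    trait="high_conscientiousness",
)
print(prompt)
```

In the paper's setting, prompts like this would be issued per query-document pair, and ratings compared across personas against an unprompted baseline to measure how much the priming effect is reduced.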
Sources
Mitigating the Threshold Priming Effect in Large Language Model-Based Relevance Judgments via Personality Infusing
In Silico Development of Psychometric Scales: Feasibility of Representative Population Data Simulation with LLMs