The field of deliberative social choice and human-centered AI systems is moving toward more sophisticated and reliable methods for collective decision-making and communication. Researchers are exploring the use of large language models (LLMs) as annotators, evaluators, and intermediaries across domains including the social sciences, education, and professional settings. The emphasis is on designing frameworks and protocols that support authentic and trustworthy interaction between humans and AI systems. Notable advances include new distortion bounds for deliberation protocols, dataset-agnostic evaluation frameworks, and role-based frameworks for human-centered LLM support systems. Together, these contributions aim to improve the accuracy, reliability, and transparency of AI-generated content and of human-AI interaction more broadly.
Noteworthy papers include the following. The paper on deliberation via matching introduces a protocol that achieves a tight distortion bound of 3, matching the corresponding lower bound. The paper on LLM-based annotation proposes a consistent and reliable method for annotating subjective tasks, and shows that models trained on the LLM-generated annotations achieve improved test-set performance. The paper on the computational Turing test reveals systematic differences between human and AI language, underscoring the need for more robust validation frameworks and calibration strategies.
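
For context on the distortion result, distortion is usually defined as the worst-case ratio between the social cost of the alternative the protocol selects and the cost of the optimal alternative. The formalization below follows the standard metric social choice convention and is offered as background, not as the paper's exact formalism.

```latex
% Standard metric-distortion definition (assumed convention, not taken from the paper):
% voters v_1,...,v_n and alternatives lie in a common metric space with distance d.
\[
  \mathrm{SC}(a) \;=\; \sum_{i=1}^{n} d(v_i, a),
  \qquad
  \mathrm{distortion}(f) \;=\; \sup_{\text{instances}}
  \frac{\mathrm{SC}\bigl(f(\text{instance})\bigr)}{\min_{a'} \mathrm{SC}(a')}.
\]
% A distortion bound of 3 means the selected alternative's total cost is at most
% three times the optimal social cost on every instance the protocol can face.
```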
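
On the annotation result, a common way to make LLM-generated labels consistent before training downstream models is to sample each item several times and keep only labels with sufficient self-agreement. The sketch below illustrates that generic pattern; the `annotate` function, label set, and agreement threshold are hypothetical placeholders and not the specific method proposed in the paper.

```python
from collections import Counter

def annotate(text: str) -> str:
    """Placeholder for a single LLM annotation call (hypothetical API)."""
    raise NotImplementedError

def consistent_label(text: str, n_samples: int = 5, min_agreement: float = 0.8):
    """Sample the LLM several times and return the majority label only if its
    vote share reaches the agreement threshold; otherwise return None so the
    item can be routed to human annotators instead."""
    votes = Counter(annotate(text) for _ in range(n_samples))
    label, count = votes.most_common(1)[0]
    return label if count / n_samples >= min_agreement else None

# Usage: build a training set from only the items the LLM labels consistently.
# labeled = {t: lab for t in texts if (lab := consistent_label(t)) is not None}
```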