Deliberation and Human-Centered AI Systems

The field of deliberative social choice and human-centered AI systems is moving toward more sophisticated and reliable methods for collective decision-making and communication. Researchers are exploring large language models (LLMs) as annotators, evaluators, and intermediaries across domains including the social sciences, education, and professional settings. The focus is on designing frameworks and protocols that facilitate authentic, trustworthy interactions between humans and AI systems. Notable advances include new distortion bounds for deliberation protocols, dataset-agnostic evaluation frameworks, and role-based frameworks for human-centered LLM support systems. Together, these innovations stand to improve the accuracy, reliability, and transparency of AI-generated content and human-AI interactions.

Noteworthy papers include the following. The paper on deliberation via matching introduces a protocol that achieves a distortion of 3, a tight bound that improves on prior guarantees (a standard definition of metric distortion is given below). The paper on LLM-based annotation proposes a consistent and reliable method for annotating subjective tasks, showing improved test-set performance for models trained on LLM-generated annotations (a minimal run-to-run consistency check is sketched below). The paper on the computational Turing test reveals systematic differences between human and AI language, underscoring the need for more robust validation frameworks and calibration strategies (a baseline detectability check is also sketched below).
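
For context, "distortion" here is the standard worst-case measure from metric social choice; the definition below is common background rather than a formula quoted from the paper. For a rule $f$ mapping a preference profile $\sigma$ to an alternative, with $d \,\triangleright\, \sigma$ ranging over metrics consistent with $\sigma$:

$$
\mathrm{dist}(f) \;=\; \sup_{\sigma}\; \sup_{d \,\triangleright\, \sigma}\; \frac{\sum_{i} d(i, f(\sigma))}{\min_{y} \sum_{i} d(i, y)}
$$

A bound of 3 therefore means the protocol's chosen alternative has social cost at most three times the optimum under any metric consistent with the agents' reports.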
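
On annotation consistency: one simple way to quantify whether an LLM annotator is consistent is to annotate the same items several times and measure pairwise agreement between runs. The sketch below is a hypothetical illustration of that idea, not the papers' actual evaluation pipeline; `get_llm_label` is a placeholder for whatever annotation call a pipeline uses.

```python
# Hypothetical sketch: measure run-to-run consistency of an LLM annotator
# on a subjective labeling task via mean pairwise Cohen's kappa.
from sklearn.metrics import cohen_kappa_score


def annotate_runs(texts, get_llm_label, runs=3):
    """Annotate each text `runs` times; returns one label list per run."""
    return [[get_llm_label(t) for t in texts] for _ in range(runs)]


def mean_pairwise_kappa(run_labels):
    """Mean Cohen's kappa over all pairs of annotation runs (higher = more consistent)."""
    scores = []
    for i in range(len(run_labels)):
        for j in range(i + 1, len(run_labels)):
            scores.append(cohen_kappa_score(run_labels[i], run_labels[j]))
    return sum(scores) / len(scores)
```

A kappa near 1.0 across runs would support treating the LLM's labels as stable enough to train on; low or erratic kappa would argue for calibration or human adjudication.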
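
On the computational Turing test: the underlying logic is that if a classifier can separate human-written from AI-generated text well above chance, the two distributions differ systematically. The sketch below is a hypothetical TF-IDF baseline illustrating that logic; the paper's actual features and models may differ.

```python
# Hypothetical sketch: a simple detectability check for human vs. AI text.
# Cross-validated ROC AUC near 0.5 means indistinguishable; near 1.0 means
# the two text sources differ systematically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def detectability_auc(human_texts, ai_texts):
    texts = human_texts + ai_texts
    labels = [0] * len(human_texts) + [1] * len(ai_texts)
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
        LogisticRegression(max_iter=1000),
    )
    return cross_val_score(clf, texts, labels, cv=5, scoring="roc_auc").mean()
```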

Sources

Deliberation via Matching

Towards Consistent Detection of Cognitive Distortions: LLM-Based Annotation and Dataset-Agnostic Evaluation

Beyond Chat: a Framework for LLMs as Human-Centered Support Systems

Trustworthy LLM-Mediated Communication: Evaluating Information Fidelity in LLM as a Communicator (LAAC) Framework in Multiple Application Domains

Computational Turing Test Reveals Systematic Differences Between Human and AI Language

Question the Questions: Auditing Representation in Online Deliberative Processes
