Advances in Human-Machine Communication and Large Language Models

The field of human-machine communication is undergoing significant change with the emergence of large language models (LLMs). Researchers are reevaluating traditional pragmatic theories and exploring new frameworks to better understand the dynamic interface between humans and machines. The focus is shifting from human-centered approaches toward machine-centered, probabilistic models such as the Rational Speech Act framework. This shift is driven by the need to address challenges posed by LLMs, including the problem of atypicality, context frustration, and the unreliability of LLMs as simulations of human psychology.

Noteworthy papers in this area include:

'Pragmatics beyond humans: meaning, communication, and LLMs', which proposes the Human-Machine Communication framework as a more suitable alternative to the traditional semiotic trichotomy.

'The Problem of Atypicality in LLM-Powered Psychiatry', which introduces dynamic contextual certification to address the structural risk of atypicality in LLMs.

'COMPEER: Controllable Empathetic Reinforcement Reasoning for Emotional Support Conversation', which proposes controllable empathetic reasoning to enhance the emotional support ability of conversational models.
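To make the Rational Speech Act framework mentioned above concrete, here is a minimal sketch of its standard recursion (literal listener, pragmatic speaker, pragmatic listener). The toy reference-game lexicon, uniform priors, and rationality parameter `alpha` are illustrative assumptions, not drawn from any of the papers listed here:

```python
import math

# Toy reference game (assumed for illustration): three possible referents,
# two utterances a speaker can use to pick one out.
meanings = ["glasses", "hat", "glasses+hat"]
utterances = ["glasses", "hat"]

def literal(u, m):
    # Literal semantics: utterance u is true of meaning m
    # iff the named feature is part of the referent.
    return 1.0 if u in m else 0.0

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):
    # Literal listener: L0(m|u) proportional to [[u]](m) * prior(m),
    # with a uniform prior over meanings here.
    return normalize({m: literal(u, m) for m in meanings})

def S1(m, alpha=1.0):
    # Pragmatic speaker: S1(u|m) proportional to exp(alpha * log L0(m|u)),
    # i.e. the speaker prefers utterances a literal listener would resolve correctly.
    scores = {}
    for u in utterances:
        p = L0(u).get(m, 0.0)
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalize(scores)

def L1(u, alpha=1.0):
    # Pragmatic listener: L1(m|u) proportional to S1(u|m) * prior(m).
    return normalize({m: S1(m, alpha).get(u, 0.0) for m in meanings})

print(L1("glasses"))  # "glasses" shifts belief toward the glasses-only referent
```

The recursion reproduces the classic pragmatic inference: hearing "glasses", the pragmatic listener favors the referent with only glasses (probability 2/3 over 1/3), because a speaker describing the glasses-and-hat referent could equally well have said "hat".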

Sources

Pragmatics beyond humans: meaning, communication, and LLMs

Analysis and Constructive Criticism of the Official Data Protection Impact Assessment of the German Corona-Warn-App

The Problem of Atypicality in LLM-Powered Psychiatry

Large Language Models Do Not Simulate Human Psychology

Exploring Safety Alignment Evaluation of LLMs in Chinese Mental Health Dialogues via LLM-as-Judge

COMPEER: Controllable Empathetic Reinforcement Reasoning for Emotional Support Conversation

Digital Contact Tracing: Examining the Effects of Understanding and Release Organization on Public Trust

Psyche-R1: Towards Reliable Psychological LLMs through Unified Empathy, Expertise, and Reasoning
