The field of human-machine communication is undergoing significant change with the emergence of large language models (LLMs). Researchers are reevaluating traditional pragmatic theories and exploring new frameworks that better capture the dynamic interface between humans and machines. The focus is shifting from human-centered approaches toward machine-centered and probabilistic models, such as the Rational Speech Act framework. This shift is driven by the need to address the challenges that LLMs pose, including the problem of atypicality, context frustration, and the lack of reliability in simulating human psychology. Noteworthy papers in this area include 'Pragmatics beyond humans: meaning, communication, and LLMs', which proposes the Human-Machine Communication framework as a more suitable alternative to the traditional semiotic trichotomy; 'The Problem of Atypicality in LLM-Powered Psychiatry', which introduces dynamic contextual certification to address the structural risk of atypicality in LLMs; and 'COMPEER: Controllable Empathetic Reinforcement Reasoning for Emotional Support Conversation', which proposes controllable empathetic reasoning to strengthen the emotional support capabilities of conversational models.
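To make the probabilistic direction concrete, the sketch below illustrates the core recursion of the Rational Speech Act framework (literal listener, pragmatic speaker, pragmatic listener) on the classic scalar-implicature example. It is a generic, minimal illustration of RSA rather than code from any of the cited papers; the two-world lexicon, the uniform prior, and the rationality parameter `alpha` are assumptions made purely for the toy example.

```python
import numpy as np

# Worlds (meanings) and utterances for the classic scalar-implicature example.
worlds = ["some-but-not-all", "all"]
utterances = ["some", "all"]

# Truth-conditional lexicon: lexicon[u][w] = 1 if utterance u is literally true in world w.
lexicon = np.array([
    [1.0, 1.0],  # "some" is true in both worlds
    [0.0, 1.0],  # "all" is true only in the all-world
])

prior = np.array([0.5, 0.5])        # uniform prior over worlds (assumed)
alpha = 1.0                         # speaker rationality parameter (assumed)
cost = np.zeros(len(utterances))    # no utterance costs in this toy example


def normalize(rows):
    """Normalize each row to sum to 1, leaving all-zero rows as zeros."""
    totals = rows.sum(axis=1, keepdims=True)
    return np.divide(rows, totals, out=np.zeros_like(rows), where=totals > 0)


# Literal listener: L0(w | u) proportional to lexicon[u][w] * prior[w]
L0 = normalize(lexicon * prior)

# Pragmatic speaker: S1(u | w) proportional to exp(alpha * (log L0(w | u) - cost(u)))
with np.errstate(divide="ignore"):
    utility = alpha * (np.log(L0) - cost[:, None])
S1 = normalize(np.exp(utility).T)   # rows indexed by world, columns by utterance

# Pragmatic listener: L1(w | u) proportional to S1(u | w) * prior[w]
L1 = normalize(S1.T * prior)

for i, u in enumerate(utterances):
    print(u, dict(zip(worlds, np.round(L1[i], 3))))
# Hearing "some", the pragmatic listener shifts belief toward "some-but-not-all"
# (about 0.75 here), recovering the scalar implicature from literal semantics alone.
```

The point of the example is only to show what "probabilistic pragmatics" buys: enrichment of literal meaning falls out of recursive reasoning about a rational speaker, which is the style of model the surveyed work contrasts with purely human-centered accounts.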