Developments in Pragmatics and Natural Language Processing

The field of pragmatics and natural language processing is moving toward a more probabilistic, Bayesian approach to modeling communicative exchanges and phenomena. This shift is evident in the growing use of Bayesian probability theory to formalize pragmatic reasoning and in new computational tools and methods for carrying out the required probabilistic computation. Applying these approaches to relevance-theoretic pragmatics and to the study of conversational implicatures is a notable trend. There is also growing interest in using large language models to optimize information design and framing, and to detect promotional language in scientific research.

Noteworthy papers include:

Conversational Implicatures: Modelling Relevance Theory Probabilistically, which applies Bayesian probability theory to relevance-theoretic pragmatics.

Do Repetitions Matter? Strengthening Reliability in LLM Evaluations, which highlights the importance of repetition for reliable evaluation of large language models.

Hype or not? Formalizing Automatic Promotional Language Detection in Biomedical Research, which introduces the task of detecting promotional language in scientific research.

Information Design With Large Language Models, which formalizes a language-based notion of framing and bridges it to the Bayesian-persuasion model.

Designing Inferable Signaling Schemes for Bayesian Persuasion, which studies the setting where the receiver infers the signaling scheme from repeated interactions.
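As a rough illustration of the Bayesian machinery this line of work builds on, the sketch below computes a listener's posterior over candidate interpretations of an utterance using Bayes' rule. It is a minimal, hypothetical example, not code from any of the papers above: the interpretation labels, prior, and likelihood values are all assumed for illustration.

```python
import numpy as np

# Minimal sketch of Bayesian pragmatic inference (hypothetical example):
# a listener hears an utterance u and infers the intended interpretation i
# via Bayes' rule, P(i | u) proportional to P(u | i) * P(i).

interpretations = ["literal", "implicature"]

# Prior over interpretations (assumed for illustration).
prior = np.array([0.5, 0.5])

# Likelihood of the observed utterance under each interpretation.
# These numbers are placeholders; a real model would derive them from
# an explicit speaker model or from corpus estimates.
likelihood = np.array([0.2, 0.6])

# Bayes' rule: multiply likelihood by prior, then normalize.
posterior = likelihood * prior
posterior /= posterior.sum()

for interp, p in zip(interpretations, posterior):
    print(f"P({interp} | utterance) = {p:.2f}")
```

Probabilistic pragmatics frameworks differ in how they derive the likelihood term, but the posterior update over interpretations takes this general form.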

Sources

Conversational Implicatures: Modelling Relevance Theory Probabilistically

Do Repetitions Matter? Strengthening Reliability in LLM Evaluations

Hype or not? Formalizing Automatic Promotional Language Detection in Biomedical Research

Information Design With Large Language Models

Designing Inferable Signaling Schemes for Bayesian Persuasion
