Temporal Semantics and Natural Language Inference

The field of natural language processing is moving toward a deeper understanding of temporal semantics and its implications for natural language inference. Researchers are examining the challenges of modeling temporal relationships and reasoning across languages, including those with limited grammatical marking of tense. New datasets and benchmarks are enabling the evaluation of large language models and retrieval-augmented generation systems on temporally sensitive tasks. Noteworthy papers include LLMs Struggle with NLI for Perfect Aspect, which highlights the limitations of large language models in temporal inference, and TComQA: Extracting Temporal Commonsense from Text, which proposes a pipeline for extracting temporal commonsense from text and achieves high precision on temporal question answering.
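To make the kind of temporal inference at issue concrete, the sketch below constructs NLI premise/hypothesis pairs that probe entailments of the English perfect aspect. The templates, lexical choices, and gold labels are illustrative assumptions for exposition, not items drawn from the cited datasets.

```python
# Illustrative sketch: NLI probes for perfect-aspect entailment.
# Templates and labels are assumptions, not taken from the cited benchmarks.

def make_perfect_aspect_pairs(subject: str, past_participle: str):
    """Build probes testing that the present perfect entails a completed
    event but does not entail that the event is still in progress."""
    premise = f"{subject} has {past_participle} the report."
    return [
        # Perfect aspect entails completion of the event.
        (premise, f"{subject} finished the report.", "entailment"),
        # It rules out the event being ongoing at speech time.
        (premise, f"{subject} is still writing the report.", "contradiction"),
    ]

for premise, hypothesis, label in make_perfect_aspect_pairs("Alice", "written"):
    print(f"{label}: {premise} -> {hypothesis}")
```

Pairs like these can then be fed to a model's NLI interface to check whether it tracks the completion entailment that the perfect aspect carries.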

Sources

LLMs Struggle with NLI for Perfect Aspect: A Cross-Linguistic Study in Chinese and Japanese

A Question Answering Dataset for Temporal-Sensitive Retrieval-Augmented Generation

Contrastive Analysis of Constituent Order Preferences Within Adverbial Roles in English and Chinese News: A Large-Language-Model-Driven Approach

TComQA: Extracting Temporal Commonsense from Text
