Advances in Temporal Information Processing

The field of temporal information processing is shifting toward large language models (LLMs) and novel frameworks that improve the accuracy and interpretability of temporal data. Researchers are applying LLMs across domains such as traffic signal control, time normalization, and temporal information retrieval. A key direction is the development of methods that integrate symbolic knowledge into data-driven learning algorithms, enabling continuous learning and optimization directly in the semantic space of formulae. Noteworthy papers in this area include:

Chat2SPaT, which proposes an LLM-based tool for automating traffic signal control plan management and achieves over 94% accuracy in plan generation.

A Semantic Parsing Framework for End-to-End Time Normalization, which formulates time normalization as a code generation task and demonstrates strong performance with small, locally deployable models (see the sketch after this list).

Temporal Information Retrieval via Time-Specifier Model Merging, which enhances retrieval for temporally specified queries while preserving accuracy on non-temporal queries (a merging sketch follows below).

Bridging Logic and Learning: Decoding Temporal Logic Embeddings via Transformers, which inverts semantic embeddings of Signal Temporal Logic formulae using a decoder-only Transformer model.
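For the time normalization work, the central idea is to map a natural-language temporal expression to a short executable program that resolves against a reference date. The paper's actual grammar and model are not reproduced here; the following is a minimal sketch of that framing, using Python's datetime and a hypothetical next_weekday helper as the generated target code.

```python
from datetime import date, timedelta

# Hypothetical helper that generated programs may call; not taken from the paper.
def next_weekday(anchor: date, weekday: int) -> date:
    """Return the first occurrence of `weekday` strictly after `anchor`
    (Monday = 0 ... Sunday = 6)."""
    days_ahead = (weekday - anchor.weekday() - 1) % 7 + 1
    return anchor + timedelta(days=days_ahead)

# Example: normalizing the phrase "next Friday" relative to 2024-07-01 (a Monday).
anchor = date(2024, 7, 1)
generated_program = "next_weekday(anchor, 4)"   # what a parser might emit for "next Friday"
resolved = eval(generated_program)              # -> date(2024, 7, 5)
print(resolved.isoformat())
```

Under this assumed setup, the generated expression resolves "next Friday" against the anchor date 2024-07-01 to 2024-07-05.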

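The time-specifier model merging method is described above only at a high level. One common merging scheme for two checkpoints with the same architecture is linear weight interpolation, sketched below with a hypothetical merge_state_dicts helper; the paper's actual merging procedure may differ.

```python
import torch

def merge_state_dicts(base_sd, temporal_sd, alpha=0.5):
    """Linearly interpolate two checkpoints with identical architectures.
    Assumes all entries are floating-point tensors of matching shape.
    alpha = 0 keeps the base model, alpha = 1 keeps the temporally fine-tuned model."""
    return {
        name: (1.0 - alpha) * base_sd[name] + alpha * temporal_sd[name]
        for name in base_sd
    }

# Usage sketch: fold a retriever fine-tuned on time-specified queries
# back into the general-purpose base retriever.
# merged_sd = merge_state_dicts(base.state_dict(), temporal.state_dict(), alpha=0.5)
# base.load_state_dict(merged_sd)
```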
Sources

Chat2SPaT: A Large Language Model Based Tool for Automating Traffic Signal Control Plan Management

A Semantic Parsing Framework for End-to-End Time Normalization

Temporal Information Retrieval via Time-Specifier Model Merging

Bridging Logic and Learning: Decoding Temporal Logic Embeddings via Transformers
