The field of large language models (LLMs) is increasingly focused on temporal reasoning and constraint processing. Recent studies have highlighted the limitations of current LLM architectures in handling temporal constraints, including prompt brittleness, action bias, and the lack of a reliable temporal state representation. To overcome these limitations, researchers are exploring hybrid architectures that incorporate symbolic reasoning modules. Noteworthy papers in this area include Empirical Characterization of Temporal Constraint Processing in LLMs, which reveals systematic deployment risks and argues that architectures need mechanisms for continuous temporal state representation and explicit constraint checking, and Do Large Language Models Understand Chronology, which finds that allocating an explicit reasoning budget improves performance on chronological-ordering tasks.
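The papers above call for explicit constraint checking without prescribing an implementation. As a rough illustration of the hybrid pattern they point toward, the following Python sketch (all names and types are hypothetical, not taken from either paper) keeps temporal constraints in a symbolic structure and validates LLM-proposed action times deterministically, rather than trusting the model's own temporal reasoning:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TemporalConstraint:
    """A deadline-style constraint: `action` must occur before `deadline`."""
    action: str
    deadline: datetime

def check_constraints(proposed: dict[str, datetime],
                      constraints: list[TemporalConstraint]) -> list[str]:
    """Return human-readable violations for LLM-proposed action times.

    `proposed` maps action names (parsed from the model's output)
    to the times the model scheduled them for.
    """
    violations = []
    for c in constraints:
        when = proposed.get(c.action)
        if when is None:
            violations.append(f"{c.action}: no time was assigned")
        elif when >= c.deadline:
            violations.append(
                f"{c.action}: scheduled {when.isoformat()} but must "
                f"finish before {c.deadline.isoformat()}"
            )
    return violations

# Hypothetical usage: reject a plan that schedules a task past its
# deadline; the violations could be fed back to the model for repair.
constraints = [TemporalConstraint("submit_report", datetime(2025, 6, 1, 17, 0))]
proposed = {"submit_report": datetime(2025, 6, 1, 18, 30)}
for msg in check_constraints(proposed, constraints):
    print("violation:", msg)
```

The design choice here is that the symbolic module owns the temporal state and the verdicts, so constraint satisfaction does not depend on prompt phrasing, which is one way to sidestep the brittleness these studies describe.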