The field of Large Language Models (LLMs) is advancing rapidly, with a growing focus on optimization and formalization tasks. Recent work shows that LLMs can reconstruct optimal schedules directly from natural language, although most models still struggle with precise timing, data-transfer arithmetic, and dependency enforcement (the kind of deterministic checks sketched below). LLMs have also shown promise in autoformalization, where new frameworks and methods translate informal statements into formal logic. In addition, LLM-assisted formalization has been applied to detect statutory inconsistency in complex law, highlighting its potential for ensuring the fidelity and logical consistency of LLM-generated outputs; a second sketch below illustrates this neuro-symbolic pattern.

Noteworthy papers in this area include:

- Evaluating Large Language Models for Workload Mapping and Scheduling in Heterogeneous HPC Systems, which assesses the capability of LLMs in combinatorial optimization for heterogeneous HPC settings.
- LLM-Assisted Formalization Enables Deterministic Detection of Statutory Inconsistency in the Internal Revenue Code, which introduces a hybrid neuro-symbolic framework for detecting inconsistent provisions in complex law.
- Improving Autoformalization Using Direct Dependency Retrieval, which proposes a retrieval-augmented framework for statement autoformalization.
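
The first sketch below is a minimal, hypothetical illustration (not taken from the cited papers) of the deterministic checks that evaluating LLM-generated schedules requires: every task must start only after its dependencies finish, plus the time needed to transfer their outputs. The `Task` fields and the single `transfer_in` latency are simplifying assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    start: float        # proposed start time (s)
    duration: float     # compute time on the assigned device (s)
    deps: list          # names of predecessor tasks
    transfer_in: float  # assumed data-transfer latency from predecessors (s)

def validate_schedule(tasks):
    """Check dependency ordering and timing arithmetic of an LLM-proposed schedule."""
    by_name = {t.name: t for t in tasks}
    errors = []
    for t in tasks:
        for dep in t.deps:
            pred = by_name.get(dep)
            if pred is None:
                errors.append(f"{t.name}: unknown dependency {dep}")
                continue
            # A task may only start after its predecessor finishes
            # plus the time needed to move the predecessor's output.
            earliest = pred.start + pred.duration + t.transfer_in
            if t.start < earliest:
                errors.append(
                    f"{t.name} starts at {t.start:.1f}s but its dependency "
                    f"{dep} is not ready until {earliest:.1f}s"
                )
    return errors

# Toy example: B depends on A but is scheduled too early.
schedule = [
    Task("A", start=0.0, duration=4.0, deps=[], transfer_in=0.0),
    Task("B", start=3.0, duration=2.0, deps=["A"], transfer_in=1.0),
]
print(validate_schedule(schedule))
# -> ["B starts at 3.0s but its dependency A is not ready until 5.0s"]
```

Checks of this form catch exactly the failure modes noted above: dependency violations and timing arithmetic that ignores data-transfer cost.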
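
The second sketch illustrates the general neuro-symbolic pattern behind deterministic inconsistency detection, under the assumption that an LLM has already translated two provisions into formal constraints and an off-the-shelf SMT solver (Z3 here) checks whether they can hold simultaneously. The predicates are invented toy stand-ins, not the paper's actual encoding of the Internal Revenue Code.

```python
from z3 import Bool, Implies, Not, Solver, unsat

# Toy propositions standing in for formalized statutory conditions.
qualifies = Bool("taxpayer_qualifies_for_credit")
claims_deduction = Bool("taxpayer_claims_related_deduction")

# Provision 1 (as the LLM might formalize it): claiming the deduction
# implies the taxpayer qualifies for the credit.
p1 = Implies(claims_deduction, qualifies)
# Provision 2 (as the LLM might formalize it): claiming the deduction
# forbids qualifying for the credit.
p2 = Implies(claims_deduction, Not(qualifies))

s = Solver()
s.add(p1, p2, claims_deduction)  # scenario: a taxpayer who claims the deduction
if s.check() == unsat:
    print("Provisions are jointly inconsistent for this scenario.")
else:
    print("No inconsistency found; example model:", s.model())
```

The LLM handles the informal-to-formal translation; the solver's verdict is deterministic and reproducible, which is what makes the hybrid approach attractive for auditing complex law.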