The field of Large Language Models (LLMs) is moving towards more robust and accurate methods for tabular reasoning and query optimization. Recent work has focused on improving the ability of LLMs to generate and execute SQL queries, and on evaluating whether the SQL they produce is semantically equivalent to a reference query. There is also growing interest in applying LLMs in domain-specific settings, such as nuclear power plant operations, where data privacy and security are paramount. Two noteworthy papers illustrate these trends. LLM-Symbolic Integration for Robust Temporal Tabular Reasoning introduces a synthetic dataset and a symbolic intermediate representation that improve generalization and mitigate biases in temporal reasoning over tables. TableRAG: A Retrieval Augmented Generation Framework for Heterogeneous Document Reasoning proposes a hybrid framework that unifies textual understanding with complex manipulations over tabular data, setting a new state of the art for heterogeneous document question answering.
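A common proxy for the semantic-equivalence evaluation mentioned above is execution matching: run the generated query and the reference query against a fixture database and compare their result sets. The sketch below illustrates this idea with Python's standard `sqlite3` module; the schema, table contents, and queries are invented for illustration, and execution matching is only a necessary condition for true equivalence (two different queries can coincide on one particular database).

```python
import sqlite3

def results_match(query_a: str, query_b: str, conn: sqlite3.Connection) -> bool:
    """Treat two queries as equivalent if they return the same multiset
    of rows on the fixture database. Sorting makes the comparison
    order-insensitive, since SQL result order is unspecified without
    an ORDER BY clause."""
    rows_a = sorted(conn.execute(query_a).fetchall())
    rows_b = sorted(conn.execute(query_b).fetchall())
    return rows_a == rows_b

# Small in-memory fixture database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plants (name TEXT, capacity_mw REAL)")
conn.executemany(
    "INSERT INTO plants VALUES (?, ?)",
    [("Alpha", 1200.0), ("Beta", 950.0), ("Gamma", 1200.0)],
)

gold = "SELECT name FROM plants WHERE capacity_mw >= 1000"
pred = "SELECT name FROM plants WHERE NOT capacity_mw < 1000"
print(results_match(gold, pred, conn))  # True: same rows on this database
```

A more faithful equivalence check would test against many databases or use a symbolic prover, but execution matching on a shared fixture is the standard lightweight baseline.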
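The hybrid retrieval-plus-manipulation idea behind frameworks like TableRAG can be sketched at a very small scale: retrieve only the table rows relevant to a question, then serialize them as compact context for a downstream language model. The snippet below is a minimal illustration, not the paper's actual method; the lexical-overlap scoring, row data, and `retrieve_rows` helper are all assumptions made for the example.

```python
import re

def _tokens(text: str) -> set[str]:
    # Lowercased word tokens, ignoring punctuation.
    return set(re.findall(r"\w+", text.lower()))

def retrieve_rows(question, header, rows, k=2):
    """Score each row by lexical overlap with the question and keep
    the top-k rows, rendered as a small pipe-delimited table that
    could be placed in an LLM prompt."""
    q = _tokens(question)
    ranked = sorted(rows,
                    key=lambda r: len(q & _tokens(" ".join(map(str, r)))),
                    reverse=True)[:k]
    lines = [" | ".join(header)]
    lines += [" | ".join(map(str, r)) for r in ranked]
    return "\n".join(lines)

header = ["plant", "status", "commissioned"]
rows = [("Alpha", "operational", 1985),
        ("Beta", "decommissioned", 1972),
        ("Gamma", "operational", 2001)]
context = retrieve_rows("When was Beta decommissioned?", header, rows, k=1)
print(context)  # Header line followed by the Beta row only
```

Real systems replace the overlap score with dense embeddings and route complex manipulations to generated SQL or code, but the retrieve-then-reason shape is the same.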