The field of large language models (LLMs) is moving toward more advanced applications in coding and reasoning. Recent work focuses on improving LLM performance on tasks such as RTL code generation, constraint modeling, and symbolic reasoning. A key trend is the development of novel training datasets and fine-tuning strategies that enable LLMs to learn complex reasoning patterns and generalize to new tasks. Another active direction is the integration of LLMs with formal methods, such as satisfiability modulo theories (SMT) constraint solving, to improve the rigor and reliability of program analysis. Notable papers in this area include ScaleRTL, which introduces a reasoning LLM for RTL coding that achieves state-of-the-art performance, and CP-Bench, which evaluates the constraint-programming modeling capabilities of LLMs. Papers such as Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models and Automated Synthesis of Formally Verified Multi-Abstraction Function Summaries further demonstrate the potential of LLMs to engage in deeper symbolic reasoning and to support formal verification.
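
To make the LLM-plus-SMT pairing concrete, the sketch below shows one way such an integration could look, assuming the model emits candidate constraints in SMT-LIB 2 syntax that are then checked with the Z3 solver. This is an illustrative example, not the method of any of the papers above; the `llm_constraint` string stands in for hypothetical model output.

```python
# Minimal sketch: validate an LLM-proposed symbolic constraint with Z3.
# Assumes the z3-solver package is installed and that the LLM's output
# arrives as SMT-LIB 2 text (here hard-coded for illustration).
from z3 import Solver, parse_smt2_string, sat

# Hypothetical LLM output: a worst-case claim about a program's step count.
llm_constraint = """
(declare-const n Int)
(declare-const steps Int)
(assert (and (> n 0) (= steps (* n n))))
(assert (> steps 1000000))
"""

solver = Solver()
solver.add(parse_smt2_string(llm_constraint))

# The solver either confirms the constraint is satisfiable (and produces a
# concrete witness) or refutes it, giving a formal check on the LLM's claim.
if solver.check() == sat:
    print("Constraint satisfiable, witness:", solver.model())
else:
    print("Constraint unsatisfiable: the LLM's claim does not hold.")
```

The design point is that the solver, not the LLM, has the final word: the model proposes constraints, and the formal tool accepts or rejects them, which is the reliability benefit these lines of work pursue.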