The field of large language models (LLMs) is advancing rapidly, with a strong focus on improving mathematical reasoning and problem-solving capabilities. Recent work has produced large-scale datasets such as the Open Proof Corpus, which enable systematic evaluation and improvement of LLMs in mathematical proof generation. In parallel, hybrid approaches that combine rule-based systems with LLMs have shown promise for the automatic generation of mathematical conjectures. LLMs are also being applied across domains including network optimization, UAV control, and theorem proving, with notable successes in generating human-like action sequences and solving stochastic modeling problems. Challenges remain, however, such as addressing cultural gaps in how mathematical problems are presented and improving the reliability of LLM-driven systems. Noteworthy papers include LeanConjecturer, a pipeline for automatically generating mathematical conjectures in Lean, and Bourbaki, a modular theorem-proving system that achieves state-of-the-art results on university-level problems. Frameworks such as RALLY and NL2FLOW further demonstrate the potential of LLMs for complex problem-solving tasks such as role-adaptive navigation and parametric problem generation.
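
To make the conjecture-generation-plus-proving loop concrete, here is a minimal sketch of what such a pipeline produces. The snippet below is a hypothetical illustration, not actual LeanConjecturer or Bourbaki output: it shows the general shape of an automatically generated Lean 4 conjecture stated over Mathlib, with the proof left as `sorry` for a downstream prover to attempt. The theorem name and the specific statement are assumptions chosen for illustration.

```lean
import Mathlib.Algebra.Group.Basic

-- Hypothetical example of a machine-generated conjecture: a pipeline of this
-- kind proposes candidate statements over an existing library such as Mathlib.
-- This particular candidate happens to restate Mathlib's `mul_inv_cancel_left`.
theorem conjecture_candidate {G : Type*} [Group G] (a b : G) :
    a * (a⁻¹ * b) = b := by
  -- A prover (e.g., a Bourbaki-style system) would try to replace this
  -- placeholder with a complete proof.
  sorry
```

The value of the loop comes from the division of labor: the generator only has to emit well-typed statements, while the prover filters them by attempting proofs, so unprovable or trivial candidates are discarded automatically.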