The field of artificial intelligence is seeing significant advances in neuro-symbolic systems, particularly for mathematical reasoning. Recent work has focused on architectures that learn to execute symbolic algorithms, yielding strong generalization and out-of-distribution performance. A key direction is the integration of neural networks with symbolic methods, which produces models that are both more robust and more efficient. In parallel, multi-stage optimization frameworks and novel training methods have been proposed to improve the performance of large language models on complex mathematical problems; these innovations have delivered state-of-the-art results on several benchmarks, underscoring the potential of neuro-symbolic approaches to advance mathematical reasoning. Noteworthy papers include JT-Math, which introduces a multi-stage framework for advanced mathematical reasoning in large language models and achieves state-of-the-art results among open-source models of similar size, and SAND-Math, which presents a pipeline for generating novel, difficult, and useful mathematics questions and answers, significantly boosting performance on the AIME25 benchmark.
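
The neural-symbolic integration described above is commonly realized as a propose-and-verify loop: a neural model proposes a candidate answer and a symbolic engine checks it. Below is a minimal, illustrative sketch of that pattern using sympy; it is a generic example of the technique, not the mechanism of any paper cited here.

```python
import sympy as sp

def symbolic_verify(candidate: str, reference: str) -> bool:
    """Accept a model-proposed expression iff it is symbolically
    equivalent to the reference answer (not mere string equality)."""
    try:
        # simplify(candidate - reference) == 0 establishes equivalence
        diff = sp.simplify(sp.sympify(candidate) - sp.sympify(reference))
    except (sp.SympifyError, SyntaxError, TypeError):
        return False  # unparseable proposals are rejected outright
    return diff == 0

# A neural model might emit any of these surface forms; the symbolic
# checker accepts exactly the algebraically equivalent ones.
for proposal in ["(x + 1)**2", "x**2 + 2*x + 1", "x**2 + 2*x"]:
    print(proposal, "->", symbolic_verify(proposal, "(x + 1)**2"))
```

Note that plain string matching would reject the second proposal even though it is algebraically identical to the reference; symbolic checking of this kind is one source of the robustness these hybrid systems are credited with.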
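
Pipelines in the spirit of SAND-Math typically filter synthetic problems by difficulty, keeping items that a baseline model rarely solves while retaining a verifiable reference answer. The sketch below shows one plausible filtering loop under that assumption; `Problem`, `baseline_solve`, and the simulated solve rate are hypothetical stand-ins, since the paper's actual pipeline is not detailed in this summary.

```python
import random
from dataclasses import dataclass

@dataclass
class Problem:
    question: str
    answer: str  # verifiable reference answer

def baseline_solve(problem: Problem, skill: float = 0.3) -> str:
    """Hypothetical stand-in for a baseline solver model; here it
    simply answers correctly with probability `skill`."""
    return problem.answer if random.random() < skill else "wrong"

def difficulty_filter(candidates: list[Problem], attempts: int = 4,
                      max_solve_rate: float = 0.25) -> list[Problem]:
    """Keep only problems the baseline solves rarely: a low empirical
    solve rate is a cheap proxy for difficulty, and the stored answer
    keeps each retained item verifiable, hence useful for training."""
    kept = []
    for prob in candidates:
        solved = sum(baseline_solve(prob) == prob.answer
                     for _ in range(attempts))
        if solved / attempts <= max_solve_rate:
            kept.append(prob)
    return kept

pool = [Problem("Compute 2**10.", "1024"),
        Problem("Sum of the first 100 odd numbers?", "10000")]
print([p.question for p in difficulty_filter(pool)])
```

The design choice worth noting is that difficulty is measured behaviorally (how often a fixed baseline fails) rather than by any intrinsic property of the question, which is a common and cheap heuristic in synthetic-data pipelines of this kind.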