The field of mathematical expression recognition and reasoning is advancing rapidly, driven by innovative applications of self-supervised learning, reinforcement learning, and large language models. Researchers are improving the recognition of handwritten mathematical expressions with techniques such as progressive spatial masking and context-aware, voice-powered math workspaces. There is also growing interest in compact, efficient language models tailored to specific academic exams, such as the Joint Entrance Examination (JEE); these models are fine-tuned with curriculum learning and reinforcement learning with verifiable rewards to boost performance.

Noteworthy papers in this area include Mask & Match, a self-supervised learning framework for recognizing handwritten mathematical expressions; Aryabhata, a compact 7B-parameter math reasoning model optimized for the JEE; and We-Math 2.0, which integrates a structured mathematical knowledge system, model-centric data space modeling, and a reinforcement-learning-based training paradigm to strengthen the mathematical reasoning abilities of multimodal large language models.
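To make the "progressive spatial masking" idea concrete, the sketch below shows one plausible (hypothetical) realization: a masking curriculum for self-supervised pretraining in which the fraction of occluded image patches grows linearly over training, so a model first reconstructs nearly complete expressions and only later heavily occluded ones. The function name, patch size, and linear schedule are assumptions for illustration, not the method from any specific paper.

```python
import numpy as np

def progressive_spatial_mask(image, step, max_steps, patch=16,
                             max_ratio=0.5, seed=0):
    """Mask a growing fraction of spatial patches as training progresses.

    Hypothetical sketch: the mask ratio ramps linearly from 0 to
    max_ratio over max_steps, masking randomly chosen patch-sized
    squares of the input (set to 0). Returns the masked image and the
    boolean patch-level mask, which a reconstruction loss could target.
    """
    h, w = image.shape
    gh, gw = h // patch, w // patch
    ratio = max_ratio * min(step / max_steps, 1.0)
    n_mask = int(round(ratio * gh * gw))
    rng = np.random.default_rng(seed + step)
    idx = rng.choice(gh * gw, size=n_mask, replace=False)
    mask = np.zeros(gh * gw, dtype=bool)
    mask[idx] = True
    masked = image.copy()
    for i in np.flatnonzero(mask):
        r, c = divmod(i, gw)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return masked, mask.reshape(gh, gw)
```

Early in training (`step=0`) no patches are hidden; by `step=max_steps` half of them are, forcing the model to infer symbols and spatial layout from context.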
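"Reinforcement learning with verifiable rewards" (RLVR) replaces a learned reward model with an automatic correctness check. A minimal sketch, assuming a simple numeric-equality verifier (the normalization rules here are illustrative, not from any particular system):

```python
def verifiable_reward(model_answer: str, reference: str) -> float:
    """Binary reward from an automatic answer check.

    Sketch of the RLVR idea: reward is 1.0 when the model's final
    answer verifies against the reference (here, equality after light
    normalization, treating numeric strings as numbers), else 0.0.
    """
    def normalize(s: str):
        s = s.strip().rstrip(".")
        try:
            return float(s)  # "42" and "42.0" compare equal
        except ValueError:
            return s.lower()
    return 1.0 if normalize(model_answer) == normalize(reference) else 0.0
```

Because the reward is computed rather than predicted, it cannot be gamed the way a learned reward model can, which is part of its appeal for math fine-tuning.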