The field of large language models (LLMs) is seeing rapid advances in reasoning capabilities, driven by innovative applications of reinforcement learning and knowledge-expansion techniques. Researchers are exploring new methods to enhance LLM performance in specific domains, such as finance, and to improve the stability and efficiency of reinforcement learning. Noteworthy developments include multi-stage enhancement frameworks, progressive optimization techniques, and off-policy reinforcement learning. These advances have yielded state-of-the-art results on various benchmarks, with some models achieving marked improvements over both larger models and domain specialists.

Notable papers include:

- FEVO, which introduces a multi-stage enhancement framework to expand financial-domain knowledge and improve LLM performance.
- From Data-Centric to Sample-Centric, which proposes a framework of progressive optimization techniques to enhance LLM reasoning.
- Squeeze the Soaked Sponge, which introduces a general approach that enables on-policy reinforcement fine-tuning methods to leverage off-policy data.
- First Return, Entropy-Eliciting Explore, which proposes a structured exploration framework that identifies high-uncertainty decision points and performs targeted rollouts from them.
- RLEP, which presents a two-phase framework that collects verified trajectories and replays them during subsequent training to steer the model away from fruitless exploration.
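To make the replay idea concrete, the following is a minimal, illustrative Python sketch of the two-phase pattern described for RLEP: a buffer of verified trajectories is filled in a first phase, then mixed into later training batches alongside fresh rollouts. All names here (Trajectory, ReplayBuffer, build_training_batch, replay_fraction) and the specific mixing ratio are hypothetical assumptions for illustration, not details taken from the paper.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    prompt: str
    response: str
    reward: float  # 1.0 if the final answer was verified correct, else 0.0

@dataclass
class ReplayBuffer:
    """Stores trajectories whose answers passed verification (phase one)."""
    trajectories: list = field(default_factory=list)

    def add_if_verified(self, traj: Trajectory) -> None:
        # Keep only trajectories that reached a verified-correct answer.
        if traj.reward == 1.0:
            self.trajectories.append(traj)

    def sample(self, k: int) -> list:
        k = min(k, len(self.trajectories))
        return random.sample(self.trajectories, k)

def build_training_batch(fresh_rollouts, buffer, replay_fraction=0.25):
    """Mix freshly generated rollouts with replayed verified trajectories.

    Replaying known-good solutions is meant to anchor training on proven
    reasoning paths and reduce time spent on fruitless exploration.
    """
    n_replay = int(len(fresh_rollouts) * replay_fraction)
    batch = list(fresh_rollouts) + buffer.sample(n_replay)
    random.shuffle(batch)
    return batch

# Hypothetical usage: phase one fills the buffer, phase two mixes batches.
buffer = ReplayBuffer()
buffer.add_if_verified(Trajectory("2+2=?", "4", reward=1.0))
buffer.add_if_verified(Trajectory("3*5=?", "14", reward=0.0))  # discarded: unverified
fresh = [Trajectory("7-3=?", "4", reward=1.0), Trajectory("9/3=?", "2", reward=0.0)]
print(build_training_batch(fresh, buffer, replay_fraction=0.5))
```

The same batch-mixing pattern, with a ratio hyperparameter controlling how much stored data is reused, is one simple way to picture how on-policy fine-tuning can be made to leverage off-policy data more broadly.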