The field of reinforcement learning for large language models is moving toward more efficient and effective training and optimization. Researchers are exploring approaches that leverage sequential environmental feedback, multi-step decision-making, and process-level supervision to strengthen the reasoning capabilities of large language models. Noteworthy papers in this area include UloRL, which proposes an ultra-long output reinforcement learning approach for advancing LLM reasoning, and RLVMR, which introduces a framework that integrates dense, process-level supervision into end-to-end RL. Papers such as MoL-RL and Post-Completion Learning likewise demonstrate how multi-step textual feedback and the post-completion space can be leveraged to enhance reasoning. These methods report promising gains in both performance and robustness across a range of tasks and domains.
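To make the contrast between outcome-only rewards and the dense, process-level supervision mentioned above concrete, the sketch below assigns a small per-step reward to each verified reasoning step in addition to the final-answer reward. This is a minimal illustrative sketch under assumed conventions; the function names, reward values, and the simple step verdict are assumptions for illustration, not the actual formulation used in RLVMR or the other papers.

```python
# Sketch: outcome-only vs. dense process-level rewards for one reasoning
# trajectory. All names and reward values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Step:
    text: str       # one intermediate reasoning step emitted by the policy
    is_valid: bool  # verdict from a hypothetical step-level verifier


def outcome_reward(steps: list[Step], final_answer_correct: bool) -> list[float]:
    """Sparse signal: only the last step is rewarded, based on the final answer."""
    rewards = [0.0] * len(steps)
    if steps:
        rewards[-1] = 1.0 if final_answer_correct else 0.0
    return rewards


def process_level_reward(steps: list[Step], final_answer_correct: bool,
                         step_bonus: float = 0.1) -> list[float]:
    """Dense signal: each verified step earns a small bonus, plus the outcome term."""
    rewards = [step_bonus if s.is_valid else -step_bonus for s in steps]
    if steps:
        rewards[-1] += 1.0 if final_answer_correct else 0.0
    return rewards


def discounted_return(rewards: list[float], gamma: float = 0.99) -> float:
    """Scalar return the RL objective would maximize for this trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))


if __name__ == "__main__":
    trajectory = [
        Step("Parse the question and list knowns.", is_valid=True),
        Step("Apply the wrong formula.", is_valid=False),
        Step("Recover and compute the answer.", is_valid=True),
    ]
    sparse = outcome_reward(trajectory, final_answer_correct=True)
    dense = process_level_reward(trajectory, final_answer_correct=True)
    print("outcome-only:", sparse, "return:", round(discounted_return(sparse), 3))
    print("process-level:", dense, "return:", round(discounted_return(dense), 3))
```

The intuition the sketch captures is credit assignment: with only a terminal reward, every step in a long trajectory receives the same learning signal, whereas per-step rewards let the policy distinguish useful reasoning steps from flawed ones within the same rollout.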