The field of generative modeling is advancing rapidly through the integration of diffusion models and reinforcement learning. Researchers are exploring approaches that improve the stability and quality of generated samples, particularly in molecular generation and image synthesis. Incorporating physical feedback and reward functions enables the creation of more realistic, physically meaningful structures, while unified frameworks for conditional and style-driven image generation are broadening the versatility and effectiveness of these models.

Noteworthy papers in this area include:

- Guiding Diffusion Models with Reinforcement Learning for Stable Molecule Generation, which proposes a framework for generating stable molecules using reinforcement learning with physical feedback.
- Inference-Time Alignment Control for Diffusion Models with Reinforcement Learning Guidance, which introduces a method for dynamically controlling the alignment-quality trade-off in diffusion models using reinforcement learning guidance.
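To make the inference-time guidance idea concrete, here is a minimal toy sketch of reward-guided reverse diffusion: at each denoising step, the sample is nudged along the gradient of a reward function, steering generation without retraining the base model. Everything here is hypothetical and illustrative (the quadratic reward, the simplified one-dimensional "denoiser", and the `guidance_scale` knob are all assumptions, not any paper's actual method).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reward: prefers samples near x = 2.0, standing in for a
# learned reward model (e.g. a physical-stability score for molecules).
def reward_grad(x):
    return -(x - 2.0)  # gradient of r(x) = -0.5 * (x - 2)^2

def guided_reverse_diffusion(steps=50, guidance_scale=0.3):
    """Toy reverse-diffusion loop with inference-time reward guidance.

    At each step the sample is (1) denoised toward the base model's
    data mean (0 here) and (2) nudged up the reward gradient, scaled
    by the current noise level and a guidance weight.
    """
    x = rng.normal(0.0, 1.0)  # start from pure noise
    for t in range(steps, 0, -1):
        noise_level = t / steps
        # Simplified base "denoiser": pull toward the prior mean, add noise.
        x = x - 0.1 * noise_level * x + rng.normal(0.0, 0.05 * noise_level)
        # Inference-time guidance: ascend the reward gradient.
        x = x + guidance_scale * noise_level * reward_grad(x)
    return x

samples = [guided_reverse_diffusion() for _ in range(100)]
# The sample mean lands between the prior mean (0) and the reward peak (2),
# with guidance_scale controlling the alignment-quality trade-off.
print(np.mean(samples))
```

Raising `guidance_scale` pulls samples closer to the reward peak at the cost of drifting from the base model's distribution, which is the trade-off that inference-time alignment-control methods aim to manage dynamically.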