The field of diffusion models is advancing rapidly, with a focus on improving the efficiency and effectiveness of generation and repair tasks. Recent work has explored diffusion models for code repair, exploiting their ability to generate code by iteratively denoising latent representations. Researchers have also applied diffusion models to reinforcement learning, using them to generate synthetic data and improve generalization.

Notable papers in this area include:

- Self-Guided Action Diffusion: introduces a more efficient variant of bidirectional decoding for diffusion-based policies, achieving near-optimal performance at negligible inference cost.
- MDPO: Overcoming the Training-Inference Divide of Masked Diffusion Language Models: proposes a framework that addresses the discrepancy between training and inference in masked diffusion language models, improving both performance and efficiency.
- DPad: Efficient Diffusion Language Models with Suffix Dropout: presents a training-free method that restricts attention to nearby suffix tokens, preserving fidelity while eliminating redundant computation and achieving significant speedups.
- Pretrained Diffusion Models Are Inherently Skipped-Step Samplers: demonstrates that accelerated sampling by skipping timesteps is an intrinsic property of pretrained diffusion models.
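The skipped-step idea can be made concrete with a toy sketch. This is an illustrative assumption, not code from any of the papers above: it uses a made-up linear noise schedule, a scalar (1-D) sample, and a deterministic DDIM-style reverse pass that visits only every k-th timestep, so the noise-prediction model is called far fewer times than the full schedule length.

```python
import math
import random

def make_schedule(T=100, beta_min=1e-4, beta_max=0.02):
    """Linear beta schedule; returns cumulative products alpha_bar_t."""
    betas = [beta_min + (beta_max - beta_min) * t / (T - 1) for t in range(T)]
    alpha_bars, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        alpha_bars.append(prod)
    return alpha_bars

def skipped_step_sample(eps_model, alpha_bars, stride=10):
    """Deterministic reverse pass over a strided subset of timesteps.

    eps_model(x, t) is assumed to predict the noise component of x at step t.
    With stride=10 on a 100-step schedule, only 10 model calls are made.
    """
    T = len(alpha_bars)
    steps = list(range(T - 1, -1, -stride))   # e.g. 99, 89, ..., 9
    x = random.gauss(0.0, 1.0)                # start from pure noise
    for i, t in enumerate(steps):
        ab_t = alpha_bars[t]
        eps = eps_model(x, t)
        # predict the clean sample x0 from the current noisy x
        x0 = (x - math.sqrt(1.0 - ab_t) * eps) / math.sqrt(ab_t)
        if i + 1 < len(steps):
            # re-noise x0 down to the next (earlier) visited timestep
            ab_next = alpha_bars[steps[i + 1]]
            x = math.sqrt(ab_next) * x0 + math.sqrt(1.0 - ab_next) * eps
        else:
            x = x0                            # final step: return the estimate
    return x
```

The point of the sketch is that nothing about the model changes: the same `eps_model` trained on the full schedule is simply queried on a strided subset of timesteps, which is the sense in which skipped-step sampling is an intrinsic property rather than a retraining technique.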