Research on diffusion models is increasingly focused on efficiency: cutting the computational cost of sampling without sacrificing quality. Approaches under active exploration include shared sampling schemes, cascaded generation, and discrete-time processes, which have brought substantial gains in sampling speed and made diffusion models more practical for real-world applications. In particular, autoguidance, low-resolution conditioning, and semantic-aware sampling have shown promising results in lowering sampling cost while improving generation quality. In diffusion-based speech enhancement, one-step generative modeling and learnable sampler distillation have emerged as effective techniques for reducing inference latency while improving speech quality. Overall, the field is shifting toward diffusion models and speech enhancement methods that are more efficient, effective, and scalable.

Noteworthy papers include:
- LowDiff, a diffusion framework that conditions generation on low-resolution outputs for efficient cascaded sampling (a sketch of the general idea appears below).
- SAGE, a semantic-aware shared sampling framework for efficient diffusion.
- ArtiFree, a systematic study of artifact prediction and reduction in diffusion-based speech enhancement.
- Learnable Sampler Distillation, which trains fast, high-fidelity samplers for discrete diffusion models (see the second sketch below).
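To make the low-resolution-conditioning idea concrete, the sketch below runs most denoising steps on a cheap low-resolution draft and then spends only a few conditioned steps at full resolution. This is a minimal illustration under assumed interfaces: the `StubDenoiser` class, the Euler-style update, and all resolutions and step counts are hypothetical placeholders, not the actual LowDiff method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StubDenoiser(nn.Module):
    """Stand-in for a trained denoising network (hypothetical API)."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 3, kernel_size=3, padding=1)

    def forward(self, x, t, cond=None):
        # Optionally concatenate the low-resolution draft as conditioning.
        if cond is not None:
            x = torch.cat([x, cond], dim=1)
        return self.conv(x)

@torch.no_grad()
def cascaded_sample(lr_model, hr_model, lr_steps=50, hr_steps=10):
    # Stage 1: run most denoising steps at low resolution,
    # where each step is cheap.
    x = torch.randn(1, 3, 16, 16)
    for t in reversed(range(lr_steps)):
        x = x - (1.0 / lr_steps) * lr_model(x, t)  # Euler-style update

    # Upsample the draft so it can condition the high-resolution stage.
    cond = F.interpolate(x, size=(64, 64), mode="bilinear",
                         align_corners=False)

    # Stage 2: only a few conditioned refinement steps at full resolution.
    y = torch.randn(1, 3, 64, 64)
    for t in reversed(range(hr_steps)):
        y = y - (1.0 / hr_steps) * hr_model(y, t, cond=cond)
    return y

sample = cascaded_sample(StubDenoiser(3), StubDenoiser(6))
```

The efficiency argument is that per-step cost scales with resolution, so shifting most steps to the low-resolution stage reduces total compute even though two models are run.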
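Similarly, sampler distillation can be sketched as a few-step student sampler whose learnable step sizes are trained to reproduce the output of a many-step teacher. The toy below uses a continuous denoiser for brevity rather than the discrete diffusion setting of the paper; every class name and hyperparameter is illustrative, conveying only the distillation objective, not the paper's algorithm.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for a pretrained diffusion denoiser (hypothetical)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(),
                                 nn.Linear(64, dim))

    def forward(self, x, t):
        return self.net(torch.cat([x, t[:, None]], dim=-1))

class LearnableSampler(nn.Module):
    """Few-step sampler with trainable step sizes."""
    def __init__(self, denoiser, num_steps=4):
        super().__init__()
        self.denoiser = denoiser
        self.step_sizes = nn.Parameter(
            torch.full((num_steps,), 1.0 / num_steps))

    def forward(self, x):
        t = torch.ones(x.shape[0])
        for h in self.step_sizes:
            x = x - h * self.denoiser(x, t)  # learnable Euler step
            t = t - h
        return x

@torch.no_grad()
def teacher_sample(denoiser, x, steps=100):
    """Reference trajectory: many small, fixed Euler steps."""
    t = torch.ones(x.shape[0])
    for _ in range(steps):
        x = x - (1.0 / steps) * denoiser(x, t)
        t = t - 1.0 / steps
    return x

denoiser = TinyDenoiser()
student = LearnableSampler(denoiser, num_steps=4)
opt = torch.optim.Adam([student.step_sizes], lr=1e-2)

for _ in range(200):
    noise = torch.randn(64, 2)
    target = teacher_sample(denoiser, noise)        # frozen teacher output
    loss = ((student(noise) - target) ** 2).mean()  # match the teacher
    opt.zero_grad(); loss.backward(); opt.step()
```

The latency gain comes from the student running 4 denoiser evaluations instead of 100 at inference time; only the step sizes are optimized here, while the pretrained denoiser stays frozen.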