The field of image restoration and fusion is advancing rapidly with the development of diffusion models, which have shown strong potential for generating rich texture detail and achieving superior restoration quality. Recent research has focused on improving the efficiency of diffusion models and their adaptability to diverse degradation types and tasks. Notable advances include conditional latent diffusion frameworks, which enable effective disentanglement of object instances and high-fidelity image fusion, and the integration of semantic masks and prior information, which has improved performance on restoration and fusion tasks. Overall, the field is moving toward more efficient, scalable, and semantically aware models that adapt to a variety of tasks and datasets.

Noteworthy papers include:

- Diffusion Once and Done: an efficient all-in-one image restoration method with superior restoration performance and inference efficiency.
- TIR-Diffusion: leverages latent-space representations and wavelet-domain optimization for thermal infrared image denoising, with robust zero-shot generalization to diverse real-world datasets.
- Uni-DocDiff: a unified, scalable diffusion-based document restoration model with exceptional scalability across diverse tasks.
- Conditional Latent Diffusion Models for Zero-Shot Instance Segmentation: a novel class of diffusion models designed for object-centric prediction, achieving state-of-the-art results on multiple challenging real-world benchmarks.
- SGDFuse: a conditional diffusion model guided by the Segment Anything Model for high-fidelity infrared and visible image fusion, achieving state-of-the-art performance in both subjective and objective evaluations.
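
The conditional sampling that underlies these frameworks can be sketched as a standard DDPM ancestral step in which a condition (e.g. a semantic mask) is passed to the noise-prediction network. The sketch below is a toy illustration only: `toy_denoiser` is a hand-written stand-in for a learned network, and the schedule and shapes are arbitrary assumptions, not the implementation of any of the papers above.

```python
import numpy as np

def toy_denoiser(x_t, t, cond):
    """Toy stand-in for a learned noise predictor eps_theta(x_t, t, cond).
    The condition (e.g. a semantic mask) simply scales the prediction here."""
    return 0.1 * x_t * (1.0 + cond)

def ddpm_step(x_t, t, cond, betas, rng):
    """One ancestral DDPM sampling step x_t -> x_{t-1}, given condition `cond`."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = np.prod(1.0 - betas[: t + 1])  # cumulative product up to step t
    eps = toy_denoiser(x_t, t, cond)             # conditioned noise estimate
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_t)
    if t > 0:                                    # no noise added on the final step
        mean = mean + np.sqrt(beta_t) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)              # linear noise schedule
x = rng.standard_normal((8, 8))                  # start from pure noise
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                             # toy semantic-mask condition
for t in reversed(range(len(betas))):
    x = ddpm_step(x, t, mask, betas, rng)
print(x.shape)
```

In latent-space variants, `x` would be a latent code produced by an autoencoder rather than pixels, which is what makes these methods efficient at high resolutions.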