The field of image restoration and generation is rapidly advancing, with a focus on developing more efficient and effective methods for recovering high-quality images from degraded or low-quality inputs. Recent research has emphasized incorporating visual instructions, boundary conditions, and adaptive multi-scale techniques to improve the accuracy and robustness of restoration models, while diffusion-based models and transformers have shown great promise for enhancing degraded images. These approaches achieve state-of-the-art results across a range of image restoration benchmarks and offer practical, efficient solutions for real-world applications.

Noteworthy papers in this area include:

- Improving Rectified Flow with Boundary Conditions: proposes a boundary-enforced rectified flow model that outperforms vanilla rectified flow models.
- MoiréXNet: introduces a hybrid MAP-based framework for image and video demoiréing that integrates supervised learning with generative models.
- Visual-Instructed Degradation Diffusion for All-in-One Image Restoration: presents an all-in-one image restoration framework driven by visual instruction-guided degradation diffusion.
- Reversing Flow for Image Restoration: models the degradation process as a deterministic path using continuous normalizing flows.
- TDiR: Transformer based Diffusion for Image Restoration Tasks: develops a transformer-based diffusion model for image restoration tasks.
- Learning to See in the Extremely Dark: proposes a paired-to-paired data synthesis pipeline and a diffusion-based framework for extremely low-light RAW image enhancement.
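To make the flow-based restoration idea concrete, the following is a minimal, illustrative PyTorch sketch of a rectified-flow-style restorer: it learns a velocity field along a straight path from a degraded image to its clean counterpart and restores by integrating the resulting ODE. The tiny network, flow-matching loss, and Euler solver here are generic placeholders chosen for clarity; they do not reproduce the boundary-enforced or deterministic-degradation-path formulations of the papers above.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy convolutional velocity field v(x, t); real models are far larger."""

    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x, t):
        # Broadcast the scalar time t as an extra input channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, x.shape[2], x.shape[3])
        return self.net(torch.cat([x, t_map], dim=1))

def flow_matching_loss(model, degraded, clean):
    # Straight-line interpolation between degraded (t=0) and clean (t=1);
    # the regression target is the constant velocity clean - degraded.
    t = torch.rand(degraded.shape[0], device=degraded.device)
    t4 = t.view(-1, 1, 1, 1)
    x_t = (1 - t4) * degraded + t4 * clean
    target_v = clean - degraded
    return ((model(x_t, t) - target_v) ** 2).mean()

@torch.no_grad()
def restore(model, degraded, steps=20):
    # Euler integration of dx/dt = v(x, t) from t=0 to t=1,
    # starting from the degraded observation.
    x = degraded.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * model(x, t)
    return x
```

In this simplified setup, training repeatedly calls flow_matching_loss on paired degraded/clean batches, and inference calls restore on a degraded input; boundary-enforced variants additionally constrain the velocity field at the path endpoints, which is not modeled here.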