The field of image super-resolution and restoration is advancing rapidly as new methods and techniques emerge. One key trend is the use of diffusion-based models, which have shown promising results in real-world scenarios: they can capture realistic degradation processes and synthesize low-resolution images whose artifacts resemble those found in the wild. Another area of focus is reference-based super-resolution (RefSR), which leverages high-quality reference images to enhance texture fidelity and visual realism. Researchers are also combining contrastive learning with improved diffusion-based super-resolution models to achieve accurate 3D super-resolution from limited high-resolution data. Additionally, there is growing interest in methods that can effectively reverse convolution and transposed convolution operators, yielding new operators for deep model design and applications.

Noteworthy papers in this area include:

- Sample-aware RandAugment proposes an asymmetric, search-free automatic data augmentation (AutoDA) method that dynamically adjusts augmentation policies while remaining straightforward to implement.
- OMGSR presents a universal framework applicable to DDPM- and flow-matching-based generative models, injecting the low-quality image's latent distribution at a pre-computed mid-timestep to alleviate the latent distribution gap (a minimal sketch of this idea follows the list).
- RASR introduces a practical RefSR paradigm that automatically retrieves semantically relevant high-resolution images from a reference database given only a low-quality input (see the retrieval sketch below).
- Ultra-High-Definition Reference-Based Landmark Image Super-Resolution with Generative Diffusion Prior proposes a framework that explicitly matches patterns between the low-resolution input and the high-resolution reference image.
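To make the mid-timestep injection idea concrete, here is a minimal sketch of how starting diffusion sampling from a pre-computed mid-timestep could look. It is an illustrative assumption, not OMGSR's implementation: the encoder, the noise-prediction network, the linear noise schedule, and the choice of t_mid = 400 are all placeholders.

```python
import torch

# --- Hypothetical placeholders, not components from the OMGSR paper -----
def encode_to_latent(lq_image: torch.Tensor) -> torch.Tensor:
    """Stand-in for a VAE-style encoder mapping an LQ image to a latent."""
    return torch.nn.functional.avg_pool2d(lq_image, kernel_size=8)

def denoise_eps(z_t: torch.Tensor, t: int) -> torch.Tensor:
    """Stand-in for a trained noise-prediction network eps_theta(z_t, t)."""
    return torch.zeros_like(z_t)  # a real model would predict the added noise

# DDPM-style linear beta schedule and cumulative alpha products.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def mid_timestep_sr(lq_image: torch.Tensor, t_mid: int = 400) -> torch.Tensor:
    """Sample from a pre-computed mid-timestep instead of from pure noise.

    The LQ latent is forward-diffused to t_mid, which places it close to the
    marginal distribution the denoiser expects there, and ancestral sampling
    then runs only from t_mid down to 0.
    """
    z0 = encode_to_latent(lq_image)
    a_bar = alphas_cumprod[t_mid]
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * torch.randn_like(z0)

    for t in range(t_mid, -1, -1):  # plain DDPM ancestral sampling
        eps = denoise_eps(z_t, t)
        a_bar_t = alphas_cumprod[t]
        mean = (z_t - betas[t] / (1.0 - a_bar_t).sqrt() * eps) / (1.0 - betas[t]).sqrt()
        noise = torch.randn_like(z_t) if t > 0 else torch.zeros_like(z_t)
        z_t = mean + betas[t].sqrt() * noise
    return z_t  # a real pipeline would decode this latent back to an image


lq = torch.rand(1, 3, 256, 256)            # toy low-quality input
sr_latent = mid_timestep_sr(lq, t_mid=400)
```

Starting from a mid-timestep rather than pure noise keeps more of the input's structure while still letting the generative prior add realistic detail; the point about the latent distribution gap is that the injected latent should match the noise level the model expects at that timestep.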
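The retrieval step described for RASR can likewise be pictured as nearest-neighbour search in a shared embedding space. The sketch below is an assumption for illustration rather than RASR's actual pipeline: the feature extractor is a trivial stand-in (a real system would use a learned semantic encoder, e.g. a CLIP-style image encoder), and the reference database is a random tensor.

```python
import torch

def embed(images: torch.Tensor) -> torch.Tensor:
    """Stand-in semantic feature extractor; a global average pool keeps the
    sketch self-contained where a real system would use a learned encoder."""
    feats = images.mean(dim=(2, 3))                      # (N, C)
    return torch.nn.functional.normalize(feats, dim=1)   # unit-length rows

def retrieve_references(lq_image: torch.Tensor,
                        hr_database: torch.Tensor,
                        top_k: int = 3) -> torch.Tensor:
    """Return indices of the top-k HR references closest to the LQ query.

    Query and database are mapped into the same embedding space and ranked
    by cosine similarity (dot product of L2-normalized features).
    """
    query = embed(lq_image)        # (1, C)
    keys = embed(hr_database)      # (M, C)
    sims = keys @ query.T          # (M, 1) cosine similarities
    return sims.squeeze(1).topk(top_k).indices


lq = torch.rand(1, 3, 64, 64)              # low-quality query image
database = torch.rand(100, 3, 256, 256)    # hypothetical HR reference pool
ref_ids = retrieve_references(lq, database, top_k=3)
```

The retrieved references would then be passed to a RefSR model alongside the low-quality input; ranking in a resolution-agnostic embedding space is what allows a low-quality query to match high-resolution references.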