The field of remote sensing and image processing is seeing rapid progress through the application of generative models, particularly diffusion-based models. These models have delivered strong results on complex tasks such as image super-resolution, semantic segmentation, and weather forecasting. Researchers are also combining generative models with other techniques, such as discriminative learning, to improve accuracy and robustness. Diffusion models have enabled the retrieval of high-resolution images from low-resolution inputs and have shown promise in capturing high-frequency features for semantic segmentation. Noteworthy papers in this area include 'Lightning the Night with Generative Artificial Intelligence', which pioneers the use of generative diffusion models for retrieving visible light reflectance at night, and 'Controllable Reference-Based Real-World Remote Sensing Image Super-Resolution with Generative Diffusion Priors', which proposes a controllable reference-based diffusion model for real-world remote sensing image super-resolution.
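
As a rough illustration of how a conditional diffusion model can recover a high-resolution image from a low-resolution input, the sketch below implements a generic DDPM-style sampling loop in PyTorch. It is an assumption-laden toy example, not the method of any paper mentioned above: the `TinyConditionalDenoiser`, the linear noise schedule, and conditioning by channel concatenation with a bicubically upsampled low-resolution image are all illustrative choices.

```python
# Minimal sketch of conditional diffusion sampling for image super-resolution.
# Hypothetical example: the denoiser, schedule, and conditioning scheme are
# generic DDPM-style choices, not taken from the cited papers.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products (alpha-bar_t)


class TinyConditionalDenoiser(nn.Module):
    """Toy stand-in for a U-Net: predicts noise from the noisy HR image
    concatenated with the upsampled LR image used as the condition."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x_t, lr_up, t):
        # A real model would also embed the timestep t; omitted for brevity.
        return self.net(torch.cat([x_t, lr_up], dim=1))


@torch.no_grad()
def sample_sr(model, lr_image, scale: int = 4):
    """DDPM-style ancestral sampling conditioned on a low-resolution input."""
    lr_up = F.interpolate(lr_image, scale_factor=scale, mode="bicubic",
                          align_corners=False)
    x_t = torch.randn_like(lr_up)          # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps_hat = model(x_t, lr_up, t)     # predicted noise at step t
        alpha_t, alpha_bar_t = alphas[t], alpha_bars[t]
        # Posterior mean of x_{t-1} given the predicted noise.
        mean = (x_t - (1 - alpha_t) / torch.sqrt(1 - alpha_bar_t) * eps_hat) \
               / torch.sqrt(alpha_t)
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise
    return x_t                             # super-resolved estimate


if __name__ == "__main__":
    model = TinyConditionalDenoiser()
    lr = torch.rand(1, 3, 32, 32)          # dummy 32x32 low-resolution patch
    sr = sample_sr(model, lr)              # 128x128 output for scale=4
    print(sr.shape)                        # torch.Size([1, 3, 128, 128])
```

In practice the denoiser would be a trained U-Net with timestep embeddings, and reference-based approaches such as the one proposed in the second paper additionally inject features from a reference image to control the generated details; this sketch only shows the basic conditional sampling loop.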