Advances in Generative Models for Remote Sensing and Image Processing

The field of remote sensing and image processing is advancing rapidly through the application of generative models, particularly diffusion-based models. These models have shown strong results on complex tasks such as image super-resolution, semantic segmentation, and weather forecasting. Researchers are also integrating generative models with complementary techniques, such as discriminative learning, to improve accuracy and robustness. Diffusion models now enable the retrieval of high-resolution images from low-resolution inputs and show promise in capturing the high-frequency features needed for semantic segmentation.

Noteworthy papers in this area include 'Lightning the Night with Generative Artificial Intelligence', which pioneers the use of generative diffusion models for retrieving visible-light reflectance at night, and 'Controllable Reference-Based Real-World Remote Sensing Image Super-Resolution with Generative Diffusion Priors', which proposes a controllable reference-based diffusion model for real-world remote sensing image super-resolution.
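To make the diffusion-based super-resolution idea concrete, the following is a minimal NumPy sketch of a DDPM-style reverse (denoising) chain conditioned on an upsampled low-resolution image. It is purely illustrative and not any paper's method: `toy_denoiser` is a hypothetical stand-in for a trained noise-prediction network, and the linear noise schedule and step count are assumed defaults.

```python
import numpy as np

def make_noise_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; alpha_bars[t] is the cumulative signal fraction."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def toy_denoiser(x_t, lr_cond, t):
    """Hypothetical stand-in for a learned network eps_theta(x_t, lr_cond, t).

    A real model would be trained to predict the injected noise; here we
    simply treat the residual toward the conditioning image as 'noise'.
    """
    return x_t - lr_cond

def reverse_step(x_t, lr_cond, t, betas, alphas, alpha_bars, rng):
    """One DDPM reverse step: estimate the posterior mean, add noise if t > 0."""
    eps = toy_denoiser(x_t, lr_cond, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    x_prev = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        x_prev = x_prev + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return x_prev

def sample_sr(lr_image, T=50, seed=0):
    """Run the reverse chain from pure noise, conditioned on the
    (already upsampled) low-resolution image."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_noise_schedule(T)
    x = rng.standard_normal(lr_image.shape)
    for t in reversed(range(T)):
        x = reverse_step(x, lr_image, t, betas, alphas, alpha_bars, rng)
    return x
```

Replacing `toy_denoiser` with a trained U-Net that takes the low-resolution conditioning as an extra input channel recovers the usual conditional-diffusion super-resolution setup described above.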

Sources

Lightning the Night with Generative Artificial Intelligence

Multimodal Atmospheric Super-Resolution With Deep Generative Models

Single Image Inpainting and Super-Resolution with Simultaneous Uncertainty Guarantees by Universal Reproducing Kernels

PixelBoost: Leveraging Brownian Motion for Realistic-Image Super-Resolution

Metadata, Wavelet, and Time Aware Diffusion Models for Satellite Image Super Resolution

System-Embedded Diffusion Bridge Models

Controllable Reference-Based Real-World Remote Sensing Image Super-Resolution with Generative Diffusion Priors

A Gift from the Integration of Discriminative and Diffusion-based Generative Learning: Boundary Refinement Remote Sensing Semantic Segmentation
