Infrared and Visible Image Fusion

Infrared and visible image fusion is moving toward methods that integrate complementary information across modalities while remaining robust to degraded inputs. Recent work applies vision-language models to guide degradation-aware fusion, angle-based perception frameworks for spatially sensitive fusion, and direction-aware gradient losses that preserve both texture intensity and correct edge orientation in the fused result.
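
To make the gradient-loss idea concrete, below is a minimal PyTorch sketch of a direction-aware multi-scale gradient loss. It is an illustration rather than the formulation from the cited paper: the Sobel operators, the per-pixel max-magnitude gradient target, and the cosine-based direction term are all assumptions chosen to show how edge orientation, not just edge strength, can be supervised.

```python
import torch
import torch.nn.functional as F

# Sobel kernels for horizontal and vertical gradients.
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def _gradients(img):
    """Return (gx, gy) Sobel gradients of a single-channel batch (B, 1, H, W)."""
    gx = F.conv2d(img, _SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(img, _SOBEL_Y.to(img.device), padding=1)
    return gx, gy

def direction_aware_gradient_loss(fused, ir, vis, scales=3):
    """Hypothetical multi-scale loss penalizing both gradient-magnitude
    and gradient-direction mismatch between the fused image and the
    per-pixel dominant source gradient (an assumed target, not the
    paper's exact definition)."""
    loss = 0.0
    for s in range(scales):
        gx_f, gy_f = _gradients(fused)
        gx_i, gy_i = _gradients(ir)
        gx_v, gy_v = _gradients(vis)
        # Pick, per pixel, the source gradient with the larger magnitude.
        use_ir = ((gx_i**2 + gy_i**2) >= (gx_v**2 + gy_v**2)).float()
        gx_t = use_ir * gx_i + (1 - use_ir) * gx_v
        gy_t = use_ir * gy_i + (1 - use_ir) * gy_v
        # Magnitude term: match gradient strength.
        loss = loss + F.l1_loss(gx_f, gx_t) + F.l1_loss(gy_f, gy_t)
        # Direction term: 1 - cosine similarity between gradient vectors,
        # so edges must point the same way, not merely be equally strong.
        dot = gx_f * gx_t + gy_f * gy_t
        norm = (gx_f**2 + gy_f**2).sqrt() * (gx_t**2 + gy_t**2).sqrt()
        loss = loss + (1 - dot / (norm + 1e-8)).mean()
        if s < scales - 1:  # downsample all inputs for the next scale
            fused = F.avg_pool2d(fused, 2)
            ir = F.avg_pool2d(ir, 2)
            vis = F.avg_pool2d(vis, 2)
    return loss / scales
```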

Noteworthy papers include AngularFuse, which proposes an angle-based perception framework for spatially sensitive image fusion and produces sharper, more detailed results, and SWIR-LightFusion, which introduces a multimodal framework fusing synthetic SWIR, LWIR, and RGB modalities with improved fused-image quality and real-time performance.
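
For intuition on what an angle-based objective looks like, here is a small hypothetical sketch (not AngularFuse's actual loss): each image is flattened into a vector and the loss penalizes the angle between the fused and reference vectors, making the measure invariant to global intensity scaling and thus sensitive to structure rather than brightness.

```python
import torch

def angular_loss(fused, ref, eps=1e-8):
    """Assumed angle-based similarity: penalize the angle between the
    fused image and a reference image, each treated as a flat vector.
    Inputs are (B, 1, H, W) tensors; returns the mean angle in radians."""
    f = fused.flatten(1)  # (B, H*W)
    r = ref.flatten(1)
    cos = (f * r).sum(1) / (f.norm(dim=1) * r.norm(dim=1) + eps)
    return torch.acos(cos.clamp(-1 + eps, 1 - eps)).mean()
```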

Sources

Coupled Degradation Modeling and Fusion: A VLM-Guided Degradation-Coupled Network for Degradation-Aware Infrared and Visible Image Fusion

AngularFuse: A Closer Look at Angle-based Perception for Spatial-Sensitive Multi-Modality Image Fusion

Direction-aware multi-scale gradient loss for infrared and visible image fusion

SWIR-LightFusion: Multi-spectral Semantic Fusion of Synthetic SWIR with Thermal IR (LWIR/MWIR) and RGB