Low-light image processing and multi-modal learning are growing rapidly, with research focused on enhancing image quality and improving downstream vision tasks. Recent work has applied deep learning to low-light image enhancement with promising results. Interest is also growing in multi-modal learning, particularly visible-infrared image fusion, where researchers are designing more concise and efficient structures for integrating semantic information into fusion models. Noteworthy papers in this area include:
- Dual-level Fuzzy Learning with Patch Guidance for Image Ordinal Regression, which proposes a novel framework for image ordinal regression that learns precise feature-based grading boundaries from ambiguous ordinal labels.
- UnfoldIR: Rethinking Deep Unfolding Network in Illumination Degradation Image Restoration, which introduces an illumination degradation image restoration (IDIR) model with dedicated regularization terms for smoothing the illumination and enhancing texture.
- Boosting Cross-spectral Unsupervised Domain Adaptation for Thermal Semantic Segmentation, which presents a comprehensive study on cross-spectral UDA for thermal image semantic segmentation and proposes a novel masked mutual learning strategy to promote complementary information exchange between spectral models.
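To give a flavor of the illumination-smoothing regularization mentioned for UnfoldIR, the sketch below shows a generic total-variation penalty on an estimated illumination map, a common choice in Retinex-style restoration. This is an illustrative example, not the paper's actual term; the function name `tv_smoothness` and the toy maps are assumptions for demonstration.

```python
import numpy as np

def tv_smoothness(illum):
    """Total-variation penalty on an illumination map: the sum of absolute
    horizontal and vertical pixel differences. Smoother maps score lower,
    so minimizing this term encourages piecewise-smooth illumination."""
    dh = np.abs(np.diff(illum, axis=1)).sum()  # horizontal differences
    dv = np.abs(np.diff(illum, axis=0)).sum()  # vertical differences
    return float(dh + dv)

# A perfectly flat map incurs zero penalty; a noisy map incurs a large one.
flat = np.full((8, 8), 0.5)
noisy = np.random.default_rng(0).random((8, 8))
print(tv_smoothness(flat))   # 0.0
print(tv_smoothness(noisy) > tv_smoothness(flat))
```

In an unfolding network, a term like this would typically be one of several regularizers balanced against a data-fidelity loss at each stage.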