The field of computer vision is seeing significant advances in low-light image enhancement and remote sensing image segmentation. Researchers are improving the quality of images captured in low-light conditions through techniques such as deep semantic prior guidance, multimodal learning, and fusion frameworks; in particular, incorporating semantic knowledge and text-level semantic priors is proving especially effective. Remote sensing image segmentation, in turn, is advancing through data augmentation strategies, multimodal fusion, and category-specific fusion architectures, where data augmentation combined with multimodal fusion addresses the long-tail class distribution and yields state-of-the-art results.

Noteworthy papers include:
- DeepSPG, which proposes a framework for low-light image enhancement guided by deep semantic priors and multimodal learning.
- FusionNet, which introduces a multi-model linear fusion framework for low-light image enhancement, achieving strong results on benchmark datasets.
- SRMF, which presents a data augmentation and multimodal fusion approach for long-tail ultra-high-resolution (UHR) satellite image segmentation, demonstrating state-of-the-art performance.
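To make the idea of multi-model linear fusion concrete, here is a minimal sketch of combining the outputs of several enhancement models as a convex combination. This is an illustrative assumption about what "linear fusion" means in general, not FusionNet's actual architecture; the function name and weights are hypothetical.

```python
import numpy as np

def linear_fuse(outputs, weights):
    """Fuse enhancement candidates as a convex combination.

    outputs: list of HxWxC float arrays in [0, 1], one per enhancement model.
    weights: per-model scalars, normalized here to sum to 1.
    Note: an illustrative sketch of linear fusion, not FusionNet's method.
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()  # normalize so the fused image stays in range
    fused = sum(wi * np.asarray(o, dtype=np.float64)
                for wi, o in zip(w, outputs))
    return np.clip(fused, 0.0, 1.0)

# Two hypothetical model outputs for a 2x2 single-channel patch.
a = np.array([[0.2, 0.4], [0.6, 0.8]])[..., None]
b = np.array([[0.4, 0.6], [0.8, 1.0]])[..., None]
fused = linear_fuse([a, b], weights=[0.5, 0.5])
```

In practice the weights could be learned end-to-end or tuned per image; the sketch only shows the fusion step itself.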
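One common data augmentation strategy for long-tail segmentation is to oversample tiles containing rare classes so every class is adequately represented during training. The sketch below is a generic illustration of that idea under assumed names (`oversample_tiles`, `target_per_class`); it is not SRMF's actual pipeline.

```python
import random
from collections import Counter

def oversample_tiles(tiles, labels, target_per_class):
    """Duplicate tiles of under-represented classes until each class
    reaches target_per_class samples.

    tiles: list of tile identifiers (or arrays); labels: dominant class
    per tile. Illustrative long-tail augmentation, not SRMF's method.
    """
    counts = Counter(labels)
    augmented = list(zip(tiles, labels))
    for cls, n in counts.items():
        pool = [(t, l) for t, l in zip(tiles, labels) if l == cls]
        while n < target_per_class:
            # Re-draw a random tile of the rare class (in practice one
            # would also apply flips, crops, or color jitter here).
            augmented.append(random.choice(pool))
            n += 1
    return augmented

# Hypothetical example: "road" is frequent, "pond" is rare.
tiles = ["t%d" % i for i in range(6)]
labels = ["road"] * 4 + ["pond"] * 2
balanced = oversample_tiles(tiles, labels, target_per_class=4)
```

Plain duplication only rebalances sampling frequency; real pipelines typically pair it with geometric or photometric transforms so the duplicates are not identical.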