Advances in Semantic Segmentation and Remote Sensing
The field of semantic segmentation and remote sensing is evolving rapidly, with a focus on improving accuracy, robustness, and efficiency. Recent work integrates natural-language guidance, self-supervised learning, and multi-scale feature fusion to strengthen segmentation models. Notably, vision foundation models have shown significant promise in semi-supervised learning, enabling more effective use of unlabeled data. In addition, a novel evaluation metric, the Perception Characteristics Distance, has been proposed to assess the reliability and robustness of perception systems. Noteworthy papers include Talk2SAM, which achieves state-of-the-art performance in segmenting complex-shaped objects using textual guidance, and RS-MTDF, which leverages multi-teacher distillation and fusion for semi-supervised semantic segmentation of remote sensing imagery. Together, these advances promise significant improvements in applications such as medical image segmentation, autonomous driving, and environmental monitoring.
Sources
Using Satellite Images And Self-supervised Machine Learning Networks To Detect Water Hidden Under Vegetation
RS-MTDF: Multi-Teacher Distillation and Fusion for Remote Sensing Semi-Supervised Semantic Segmentation
Perception Characteristics Distance: Measuring Stability and Robustness of Perception System in Dynamic Conditions under a Certain Decision Rule
MSSDF: Modality-Shared Self-supervised Distillation for High-Resolution Multi-modal Remote Sensing Image Learning
SRPL-SFDA: SAM-Guided Reliable Pseudo-Labels for Source-Free Domain Adaptation in Medical Image Segmentation
Urban1960SatSeg: Unsupervised Semantic Segmentation of Mid-20th Century Urban Landscapes with Satellite Imageries
Machine Learning-Based Classification of Oils Using Dielectric Properties and Microwave Resonant Sensing
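As a rough illustration of the multi-teacher distillation idea behind approaches like RS-MTDF, the sketch below fuses several teachers' per-pixel class probabilities into soft pseudo-labels and scores a student against them. This is a minimal NumPy sketch under assumed details (uniform teacher weighting, plain cross-entropy distillation loss); all function names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_teachers(teacher_logits, weights=None):
    """Average per-pixel class probabilities from several teachers.

    teacher_logits: array of shape (T, H, W, C), one logit map per teacher.
    Returns fused soft pseudo-labels of shape (H, W, C).
    """
    probs = softmax(np.asarray(teacher_logits))
    if weights is None:  # assumed: uniform weighting across teachers
        weights = np.full(probs.shape[0], 1.0 / probs.shape[0])
    return np.tensordot(weights, probs, axes=1)

def distillation_loss(student_logits, fused_probs, eps=1e-8):
    """Pixel-wise cross-entropy between fused teacher probabilities
    and the student's predicted distribution (lower is better)."""
    student_probs = softmax(student_logits)
    return float(-(fused_probs * np.log(student_probs + eps)).mean())

# Toy example: two teachers, a 4x4 unlabeled image, 3 classes.
rng = np.random.default_rng(0)
teachers = rng.normal(size=(2, 4, 4, 3))
student = rng.normal(size=(4, 4, 3))
pseudo = fuse_teachers(teachers)          # (4, 4, 3) soft pseudo-labels
loss = distillation_loss(student, pseudo)  # scalar to minimize in training
```

In a full semi-supervised pipeline this loss on unlabeled images would be added to a supervised loss on the labeled set, with the teachers kept frozen while the student is optimized.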