Advances in Remote Sensing and Semantic Segmentation

The field of remote sensing and semantic segmentation is evolving rapidly, with a focus on improving the accuracy and robustness of land cover classification, deforestation detection, and forest structural complexity mapping. Researchers are fusing complementary data sources, such as optical satellite imagery, lidar, and synthetic aperture radar (SAR), to overcome the limitations of any single sensor and to improve overall segmentation performance; a minimal sketch of such a fusion model follows this paragraph. Deep learning techniques, including convolutional neural networks and ensemble learning, are increasingly central to this work. Noteworthy papers include "Scalable deep fusion of spaceborne lidar and synthetic aperture radar for global forest structural complexity mapping", which presents a scalable deep learning framework for mapping forest structural complexity, and "HARP-NeXt: High-Speed and Accurate Range-Point Fusion Network for 3D LiDAR Semantic Segmentation", which introduces a LiDAR semantic segmentation network that achieves a superior speed-accuracy trade-off compared to state-of-the-art methods.
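As a rough illustration of the multi-source fusion described above, the sketch below shows a minimal two-branch convolutional network that fuses optical imagery with SAR backscatter for per-pixel land cover classification. It is not taken from any of the papers listed under Sources; the class name FusionSegNet, the channel counts, and the number of classes are illustrative assumptions, and a real system would use deeper encoders, pretrained backbones, and skip connections.

```python
# Minimal fusion-segmentation sketch (assumed architecture, not from the cited papers).
import torch
import torch.nn as nn


class FusionSegNet(nn.Module):
    def __init__(self, optical_channels=4, sar_channels=2, num_classes=6):
        super().__init__()
        # Separate encoders so each modality learns its own low-level features.
        self.optical_encoder = nn.Sequential(
            nn.Conv2d(optical_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.sar_encoder = nn.Sequential(
            nn.Conv2d(sar_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Encoder outputs are concatenated along the channel axis and decoded
        # to a per-pixel class map at the input resolution.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, optical, sar):
        fused = torch.cat(
            [self.optical_encoder(optical), self.sar_encoder(sar)], dim=1
        )
        return self.decoder(fused)  # (B, num_classes, H, W) logits


if __name__ == "__main__":
    model = FusionSegNet()
    optical = torch.randn(1, 4, 128, 128)  # e.g. RGB + NIR bands (assumed)
    sar = torch.randn(1, 2, 128, 128)      # e.g. VV + VH backscatter (assumed)
    logits = model(optical, sar)
    print(logits.shape)  # torch.Size([1, 6, 128, 128])
```

The two-branch design reflects the general idea behind sensor fusion in this literature: each modality is encoded separately before features are combined, so the model can exploit complementary information (optical texture and spectra versus radar structure) rather than relying on any single source.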
Sources
Not every day is a sunny day: Synthetic cloud injection for deep land cover segmentation robustness evaluation across data sources
Scalable deep fusion of spaceborne lidar and synthetic aperture radar for global forest structural complexity mapping