The field of vision-language navigation is moving toward more integrated and robust architectures that address critical weaknesses such as poor spatial reasoning and memory overload. Recent work fuses multiple specialized modules into a single pipeline, improving long-range exploration and endpoint recognition. Notable advances include dynamic map memory modules, dedicated spatial reasoning modules, and decision modules that leverage large language models for path planning (a minimal sketch of such a decision step appears after the list below). These innovations have produced state-of-the-art results on several benchmarks, with significant gains in success rate and path efficiency.

Noteworthy papers include:

- MSNav, which integrates three modules into a single framework for robust vision-language navigation, achieving state-of-the-art performance on the Room-to-Room and REVERIE datasets.
- TinyGiantVLM, a lightweight, modular two-stage framework for physical spatial reasoning that bridges visual perception and spatial understanding in industrial environments.
- Scene-Aware Vectorized Memory Multi-Agent Framework, which proposes a dual-innovation framework for visually impaired assistance, reducing memory requirements while maintaining model performance and providing real-time assistance with scene perception, text recognition, and navigation (a sketch of the vectorized-memory idea follows the first example below).
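To make the LLM-based decision-module idea concrete, here is a minimal sketch of how such a path-planning step might look. Everything in it is an assumption for illustration: the function names (`choose_next_viewpoint`, `ask_llm`), the prompt format, and the candidate representation are hypothetical and are not taken from MSNav or any other paper above.

```python
"""Illustrative sketch of an LLM-backed decision module for VLN path planning.

All names and the prompt format are assumptions, not any paper's actual API.
"""
from typing import Callable, List


def choose_next_viewpoint(
    instruction: str,
    history: List[str],
    candidates: List[str],
    ask_llm: Callable[[str], str],
) -> int:
    """Ask an LLM which candidate viewpoint best advances the instruction."""
    options = "\n".join(f"{i}: {desc}" for i, desc in enumerate(candidates))
    prompt = (
        f"Instruction: {instruction}\n"
        f"Visited so far: {' -> '.join(history) or '(start)'}\n"
        f"Candidate next viewpoints:\n{options}\n"
        "Reply with the single index of the best candidate."
    )
    reply = ask_llm(prompt)
    # Parse the first integer in the reply; fall back to candidate 0 on failure.
    for token in reply.split():
        if token.strip(".,").isdigit():
            return int(token.strip(".,"))
    return 0


# Toy stand-in for a real LLM call, so the sketch runs as-is.
def fake_llm(prompt: str) -> str:
    return "1"


idx = choose_next_viewpoint(
    "Walk past the sofa and stop at the kitchen doorway.",
    ["hallway"],
    ["a bedroom with a bed", "a kitchen doorway", "a staircase going down"],
    fake_llm,
)
print(f"Chosen candidate: {idx}")
```

Injecting the LLM as a plain callable keeps the decision logic testable without committing to any particular model or provider.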
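The memory-reduction claim for the vectorized-memory framework can be illustrated with a small sketch: store compact, normalized embeddings with short captions instead of raw frames, and retrieve the top-k most relevant entries per query. The class name, dimensions, and retrieval scheme below are assumptions for illustration, not the paper's actual method.

```python
"""Sketch of a vectorized scene memory: compact embeddings plus top-k retrieval.

Names, dimensions, and the cosine-similarity retrieval are illustrative
assumptions, not the Scene-Aware Vectorized Memory framework's actual design.
"""
import numpy as np


class VectorSceneMemory:
    def __init__(self, dim: int = 128):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.captions: list[str] = []

    def add(self, embedding: np.ndarray, caption: str) -> None:
        # Normalize so dot products below are cosine similarities.
        v = embedding / (np.linalg.norm(embedding) + 1e-8)
        self.vectors = np.vstack([self.vectors, v.astype(np.float32)])
        self.captions.append(caption)

    def query(self, embedding: np.ndarray, k: int = 3) -> list[str]:
        q = embedding / (np.linalg.norm(embedding) + 1e-8)
        scores = self.vectors @ q  # cosine similarity against every stored entry
        top = np.argsort(scores)[::-1][:k]
        return [self.captions[i] for i in top]


# Toy usage with random vectors standing in for a visual encoder's output.
rng = np.random.default_rng(0)
mem = VectorSceneMemory(dim=128)
for caption in ["crosswalk ahead", "bus stop on the left", "stairs to subway"]:
    mem.add(rng.normal(size=128), caption)
print(mem.query(rng.normal(size=128), k=2))
```

Keeping a fixed-size embedding per observation, rather than the observation itself, is what bounds memory growth while still letting the agent recall which scenes it has seen.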