The field of vision-language models (VLMs) is moving toward stronger spatial reasoning, with a focus on developing more comprehensive and challenging benchmarks to evaluate these abilities. Recent research has highlighted the limitations of current VLMs in constructing and maintaining 3D scene representations over time from visual signals, as well as their difficulty with specific 3D details such as object placement, spatial relationships, and measurements. There is also growing interest in VLMs that can parse and understand navigational signs, extract navigational cues, and perform spatial reasoning tasks that require multi-step visual simulation.

Noteworthy papers in this area include OmniSpatial, which introduces a comprehensive benchmark for spatial reasoning grounded in cognitive psychology, covering four major categories and 50 fine-grained subcategories; and GenSpace, which presents a novel benchmark and evaluation pipeline for assessing the spatial awareness of current image generation models, highlighting three core limitations in their spatial perception.
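To make the benchmark setup concrete, the sketch below shows one common way such evaluations are scored: a model is asked a multiple-choice spatial question about an image and its accuracy is computed against ground-truth answers. This is a minimal illustration under assumed names (`SpatialItem`, `query_vlm`), not the actual OmniSpatial or GenSpace pipeline.

```python
# Minimal sketch of a multiple-choice spatial-reasoning evaluation loop.
# All names here (SpatialItem, query_vlm) are hypothetical placeholders,
# not APIs from OmniSpatial or GenSpace.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SpatialItem:
    image_path: str        # input image for the question
    question: str          # e.g. "Which object is to the left of the chair?"
    choices: List[str]     # candidate answers
    answer_index: int      # index of the correct choice


def evaluate(items: List[SpatialItem],
             query_vlm: Callable[[str, str, List[str]], int]) -> float:
    """Return accuracy of a VLM on multiple-choice spatial questions.

    `query_vlm(image_path, question, choices)` is assumed to return the
    index of the model's chosen answer.
    """
    correct = 0
    for item in items:
        predicted = query_vlm(item.image_path, item.question, item.choices)
        correct += int(predicted == item.answer_index)
    return correct / len(items) if items else 0.0
```

In practice, benchmarks of this kind differ mainly in how the items are constructed (e.g., by category and subcategory, as in OmniSpatial) and in whether the model under test is a question-answering VLM or, as in GenSpace, an image generator whose outputs are scored by a separate evaluation pipeline.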