The field of vision-language models is moving toward stronger spatial reasoning, with a focus on object-centric spatial understanding and fine-grained perception. Recent studies have highlighted the gap between localization accuracy and true spatial understanding, underscoring the need for spatially aware foundation models. Researchers are exploring several approaches to close this gap, including new benchmarks and diagnostic tools for evaluating spatial reasoning (see the sketch after the list below). Notable papers in this area include:

- Spatial Reasoning in Foundation Models: presents a systematic benchmark for object-centric spatial reasoning in foundation models.
- From Bias to Balance: introduces a mechanism to mitigate spatial bias in large vision-language models.
- ColLab: proposes a collaborative spatial progressive data engine for referring expression comprehension and generation.
- VisualOverload: challenges models to perform simple, knowledge-free vision tasks in densely populated scenes.
- SpinBench: evaluates spatial reasoning in vision-language models through the task of perspective taking.
- LLM-RG: combines off-the-shelf vision-language models with large language models for symbolic reasoning in outdoor driving scenes.
- Point-It-Out: introduces a novel benchmark for embodied reasoning in vision-language models.
- VLM-FO1: bridges the gap between high-level reasoning and fine-grained perception in vision-language models.
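To make the idea of a spatial-reasoning diagnostic concrete, the following is a minimal sketch of an object-centric probe in the spirit of the benchmarks above. The probe items, the `ask_model` callable, and the keyword-match scoring are illustrative assumptions, not the protocol of any specific benchmark named in this summary.

```python
from typing import Callable

# Hypothetical probe items: (image path, spatial question, expected answer).
PROBES = [
    ("scenes/kitchen_01.jpg", "Is the mug to the left of the kettle?", "yes"),
    ("scenes/kitchen_01.jpg", "Is the mug behind the kettle?", "no"),
    ("scenes/office_03.jpg", "Which is closer to the camera: the lamp or the chair?", "chair"),
]

def evaluate_spatial_probes(
    ask_model: Callable[[str, str], str],  # (image_path, question) -> free-form answer
) -> float:
    """Score a vision-language model on simple object-centric spatial questions.

    `ask_model` wraps whatever VLM is under test; the only assumption here is
    that it returns a text answer for an (image, question) pair.
    """
    correct = 0
    for image_path, question, expected in PROBES:
        answer = ask_model(image_path, question).strip().lower()
        # Loose scoring: the item counts as correct if the expected keyword
        # appears anywhere in the model's reply.
        if expected in answer:
            correct += 1
    return correct / len(PROBES)

if __name__ == "__main__":
    # Stand-in model that always answers "yes"; a real evaluation would call a VLM here.
    accuracy = evaluate_spatial_probes(lambda image, question: "yes")
    print(f"spatial-probe accuracy: {accuracy:.2f}")
```

Keeping the model behind a plain callable keeps the probe independent of any particular VLM API, which is what lets benchmarks of this kind compare heterogeneous foundation models on the same spatial questions.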