Spatial reasoning and 3D vision-language understanding are advancing rapidly, with a focus on models that can accurately infer and manipulate the spatial and geometric properties of complex scenes. Recent work has emphasized the importance of spatial awareness for embodied AI and robotic systems, and has introduced new benchmarks and datasets for evaluating the spatial reasoning capabilities of large language models (LLMs). Proposed techniques such as denoising diffusion models and sparse coefficient fields aim to improve the efficiency and accuracy of 3D language fields, while benchmarks including LangNavBench, SpatialViz-Bench, PlanQA, SURPRISE3D, and OST-Bench enable more comprehensive evaluation of LLMs' spatial reasoning.

Two papers are particularly noteworthy. SPADE proposes a novel approach to open-vocabulary panoptic scene graph generation that outperforms state-of-the-art methods. LangSplatV2 achieves high-dimensional feature splatting and fast 3D open-vocabulary text querying, reporting 42x and 47x speedups over previous methods.