The field of multimodal large language models (MLLMs) is moving toward stronger spatial reasoning, enabling models to better understand and interpret 3D structure. This capability is crucial for applications such as embodied AI, robotics, and 3D scene-language understanding. Researchers are pursuing several directions, including implicit spatial world modeling, structure-enhanced modules, and unified models that integrate perception and reasoning. Together, these approaches give MLLMs a more holistic representation of 3D space and improve their performance on downstream spatial tasks. Noteworthy papers include S^2-MLLM, an efficient framework that strengthens spatial understanding in MLLMs through implicit spatial reasoning; MILO, which introduces a paradigm that simulates human-like spatial imagination; and COOPER, a unified model for cooperative perception and reasoning that achieves significant gains in spatial reasoning while maintaining general performance.