The field of dynamic scene reconstruction and 4D spatial intelligence is advancing rapidly, driven by innovations in 3D representations, deep learning architectures, and real-time rendering. A central goal is to reconstruct and render dynamic scenes with high fidelity, enabling applications such as virtual and augmented reality, 3D videoconferencing, and embodied AI. Recent work addresses the core challenges of dynamic scene reconstruction: complex motion, large scale variations, and sparse-view captures. Several notable papers illustrate these directions. DASH presents a real-time dynamic scene rendering framework that couples 4D hash encoding with self-supervised decomposition. VoluMe predicts 3D Gaussian reconstructions in real time from a single 2D webcam feed. MonoFusion proposes sparse-view 4D reconstruction via monocular fusion, and Enhanced Velocity Field Modeling for Gaussian Video Reconstruction introduces a flow-empowered velocity field modeling scheme.
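To give a concrete feel for the kind of spatiotemporal encoding mentioned above, the sketch below illustrates the general idea of a multi-resolution 4D hash encoding: spacetime coordinates (x, y, z, t) at several grid resolutions are hashed into learnable feature tables, and the per-level features are concatenated. This is a minimal illustration in the spirit of Instant-NGP-style hash grids extended with a time axis, not the DASH implementation; the class name, table sizes, hash primes, and resolutions are all arbitrary assumptions, and the nearest-corner lookup stands in for the trilinear/quadrilinear interpolation a real encoder would use.

```python
import numpy as np

# Per-dimension primes commonly used for spatial hashing (assumed values,
# not taken from any particular paper's implementation).
PRIMES = np.array([1, 2654435761, 805459861, 3674653429], dtype=np.uint64)

class Hash4DEncoder:
    """Illustrative multi-resolution hash encoding of (x, y, z, t) points."""

    def __init__(self, n_levels=4, table_size=2**14, feat_dim=2,
                 base_res=8, growth=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.table_size = table_size
        # Grid resolution grows geometrically across levels.
        self.resolutions = [int(base_res * growth**lvl) for lvl in range(n_levels)]
        # One learnable feature table per resolution level.
        self.tables = [rng.normal(0.0, 1e-2, size=(table_size, feat_dim))
                       for _ in range(n_levels)]

    def _hash(self, idx):
        # idx: (..., 4) integer grid coordinates -> indices into the table.
        h = np.zeros(idx.shape[:-1], dtype=np.uint64)
        for d in range(4):
            h ^= idx[..., d].astype(np.uint64) * PRIMES[d]
        return h % np.uint64(self.table_size)

    def encode(self, xyzt):
        # xyzt: (N, 4) coordinates normalized to [0, 1]^4.
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            # Nearest-corner lookup; real encoders interpolate 16 corners in 4D.
            idx = np.floor(xyzt * res).astype(np.int64)
            feats.append(table[self._hash(idx)])
        # Concatenated features form the input to a small decoder MLP.
        return np.concatenate(feats, axis=-1)

enc = Hash4DEncoder()
pts = np.random.default_rng(1).uniform(0.0, 1.0, size=(5, 4))
out = enc.encode(pts)
print(out.shape)  # (5, 8): 4 levels x 2 features each
```

The appeal of such encodings for dynamic scenes is that the hash tables keep memory bounded regardless of grid resolution, so fine spatial and temporal detail can be represented without a dense 4D voxel grid.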