The field of 3D reconstruction is moving toward more efficient and accurate methods for novel view synthesis and dynamic scene representation. Researchers are exploring new approaches to improve the quality and fidelity of 3D models, including Gaussian splatting, Transformer-based architectures, and semantic-guided motion control. These innovations enable faster and more robust reconstruction of complex scenes, with applications in computer vision, robotics, and virtual reality.

Notable papers in this area include:

- FSFSplatter: a new approach for fast surface reconstruction from free sparse images.
- From Tokens to Nodes: a motion-adaptive framework for dynamic 3D reconstruction.
- Optimized Minimal 4D Gaussian Splatting: a framework for compact 4D scene representation.
- SegMASt3R: leverages 3D foundation models for wide-baseline segment matching.
- SCas4D: a cascaded optimization framework for persistent 4D novel view synthesis.
- Generating Surface for Text-to-3D: uses conditional text generation models and 2D Gaussian splatting for 3D content creation.
- Temporal-Prior-Guided View Planning: a method for periodic 3D plant reconstruction.
- MoRe: a training-free monocular geometry refinement method for cross-view consistency.
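To give a flavor of the Gaussian-splatting idea that several of these papers build on, the sketch below shows the two core operations in simplified form: evaluating a projected (2D) Gaussian's opacity contribution at a pixel, and compositing depth-sorted splats front to back. This is a minimal illustration of the general technique, not the implementation of any paper listed above; the function names and the assumption that splats are already projected and depth-sorted are ours.

```python
import numpy as np

def splat_alpha(pixel, mean, cov, opacity):
    """Opacity contribution of one projected Gaussian splat at a pixel.

    pixel, mean: 2-vectors in screen space; cov: 2x2 covariance of the
    projected Gaussian; opacity: the splat's learned opacity in [0, 1].
    """
    d = pixel - mean
    # Unnormalized Gaussian falloff (Mahalanobis distance under cov).
    expo = -0.5 * d @ np.linalg.inv(cov) @ d
    return float(opacity * np.exp(expo))

def composite(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted splats at one pixel."""
    out = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for c, a in zip(colors, alphas):
        out += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= (1.0 - a)
    return out
```

For example, compositing a half-opaque red splat over a half-opaque green one yields RGB (0.5, 0.25, 0.0): the nearer splat contributes at full transmittance, the farther one through the remaining 0.5. Real renderers tile the image and evaluate many splats per pixel in parallel, but the per-pixel math reduces to these two steps.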