The field of 3D scene reconstruction and visualization is moving toward more efficient and accurate methods for capturing and rendering complex scenes. Researchers are exploring techniques for guiding human camera operators to collect high-quality input images, for example through situated visualization and semantic segmentation. There is also a focus on frameworks that reconstruct interactive 3D scenes by fusing multiple scans, enabling high-fidelity rendering and object-level scene manipulation. Online 3D Gaussian Splatting (3DGS) modeling is being improved through adaptive view selection and multi-view stereo, while other work tackles reliable multi-view 3D reconstruction in edge environments and low-latency 3D live remote visualization of wide-area scenes.

Noteworthy papers include IntelliCap, which proposes a situated visualization technique for scanning scenes at multiple scales; IGFuse, which reconstructs interactive Gaussian scenes by fusing observations from multiple scans; and Online 3D Gaussian Splatting Modeling with Novel View Selection, which achieves high-quality 3DGS modeling through adaptive view selection.
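To make the idea of adaptive view selection concrete, here is a minimal sketch of one common diversity-based heuristic: greedily picking camera views whose directions are maximally spread out. The scoring criterion below is an illustrative assumption for this note, not the actual selection method used in any of the papers above.

```python
import numpy as np

def select_views(directions, k):
    """Greedily pick k viewing directions that maximize angular diversity.

    Illustrative stand-in for adaptive view selection in online 3DGS
    pipelines; real systems typically score views by expected
    reconstruction gain, not pure angular spread.
    """
    directions = np.asarray(directions, dtype=float)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    chosen = [0]  # seed with the first captured view
    while len(chosen) < k:
        # Similarity of every candidate to each already-chosen view.
        sims = directions @ directions[chosen].T      # shape (N, len(chosen))
        # A candidate's "closeness" to the chosen set is its best match.
        closeness = sims.max(axis=1)
        closeness[chosen] = np.inf                    # never re-pick a view
        # Pick the candidate farthest from everything chosen so far.
        chosen.append(int(closeness.argmin()))
    return chosen

# Four unit directions: +x, -x, +y, +z; selecting 3 spreads them out.
views = [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(select_views(views, 3))  # → [0, 1, 2]
```

The same greedy farthest-point pattern appears in many view- and keyframe-selection schemes; the interesting design choice in the cited work is what replaces the simple dot-product score.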