The field of 3D scene reconstruction and novel view synthesis is advancing rapidly, with a focus on improving the accuracy and consistency of reconstructed scenes. Recent work emphasizes geometric guidance, uncertainty estimation, and hybrid approaches that combine 2D and 3D techniques; these innovations have led to significant quality gains, particularly in regions with sparse coverage or complex geometry. Notable papers in this area include:

- Perspective-aware 3D Gaussian Inpainting with Multi-view Consistency, which introduces a 3D Gaussian inpainting approach built on perspective-aware content propagation and consistency verification.
- Ev4DGS: Novel-view Rendering of Non-Rigid Objects from Monocular Event Streams, which renders novel views of non-rigidly deforming objects from a monocular event stream.
- G4Splat: Geometry-Guided Gaussian Splatting with Generative Prior, which guides Gaussian splatting with accurate metric-scale depth maps to improve 3D scene reconstruction.
- Uncertainty Matters in Dynamic Gaussian Splatting for Monocular 4D Reconstruction, which introduces an uncertainty-aware dynamic Gaussian splatting framework that propagates reliable motion cues to enhance 4D reconstruction.
- GauSSmart: Enhanced 3D Reconstruction through 2D Foundation Models and Geometric Filtering, a hybrid method that bridges 2D foundation models and 3D Gaussian splatting reconstruction.
- Inpainting the Red Planet: Diffusion Models for the Reconstruction of Martian Environments in Virtual Reality, which reconstructs the Martian surface using an unconditional diffusion model.
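
Most of these papers build on the same 3D Gaussian splatting primitive: a scene is represented as a set of anisotropic 3D Gaussians whose covariances are splatted into 2D image space through a linearized perspective projection. As background, here is a minimal Python sketch of that covariance projection step (the standard EWA splatting approximation); the function name is illustrative, and the sketch assumes the covariance is already expressed in camera coordinates under a pinhole model with focal lengths fx, fy. It is a generic illustration of the shared primitive, not any one paper's method.

```python
import numpy as np

def project_gaussian_cov(cov3d, mean_cam, fx, fy):
    """Splat a 3D Gaussian's covariance into 2D image space.

    cov3d:    (3, 3) covariance of the Gaussian in camera coordinates
    mean_cam: (3,) Gaussian center in camera coordinates, with z > 0
    fx, fy:   pinhole focal lengths in pixels
    Returns the (2, 2) covariance of the splatted 2D Gaussian.
    """
    x, y, z = mean_cam
    # Jacobian of the perspective projection (u, v) = (fx*x/z, fy*y/z),
    # linearized at the Gaussian center (the EWA splatting approximation).
    J = np.array([
        [fx / z, 0.0,    -fx * x / z**2],
        [0.0,    fy / z, -fy * y / z**2],
    ])
    return J @ cov3d @ J.T

# Example: an isotropic Gaussian 2 m in front of the camera.
cov3d = np.eye(3) * 0.01             # (10 cm standard deviation)^2
mean_cam = np.array([0.1, 0.0, 2.0])
print(project_gaussian_cov(cov3d, mean_cam, fx=500.0, fy=500.0))
```

A full renderer would additionally evaluate per-pixel Gaussian weights and alpha-blend the splats front to back; it is in and around that pipeline that the geometric, uncertainty, and generative cues described in the papers above typically enter.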