Dynamic view synthesis and reconstruction is advancing rapidly, with recent work focused on efficient frameworks for generating novel views of dynamic scenes from sparse inputs. A clear trend is the use of powerful generative priors, particularly diffusion models, to compensate for the limited viewpoints available when reconstructing dynamic scenes. These approaches support large-scale training on diverse datasets and deliver marked gains in both quality and efficiency. Noteworthy papers include MoVieS, which unifies the modeling of appearance, geometry, and motion; SmokeSVD, which efficiently reconstructs dynamic smoke from a single video; and Diffuman4D, which achieves high-fidelity view synthesis of humans from sparse-view videos and outperforms existing approaches. These methods are expected to benefit downstream applications such as scene flow estimation, moving object segmentation, and 3D point tracking.