Advancements in 4D Scene Generation and Control

The field of 4D scene generation and control is advancing rapidly, with a focus on achieving high-fidelity, temporally consistent, and spatially coherent results. Researchers are exploring approaches that decouple scene dynamics from camera pose, enabling precise control over each independently. There is also a growing emphasis on jointly controlling camera trajectory and illumination, since visual dynamics are shaped by both geometry and lighting.

Notable papers in this area include Relightable Holoported Characters, which presents a novel method for free-viewpoint rendering and relighting of dynamic humans from sparse-view RGB videos, and Dynamic-eDiTor, which introduces a training-free, text-driven 4D editing framework built on a Multimodal Diffusion Transformer and 4D Gaussian Splatting. Other noteworthy papers, such as PanFlow, Dual-Projection Fusion, and ChronosObserver, demonstrate innovative solutions for panoramic video generation, upright panorama generation, and 4D world scene representation, respectively.

Sources

Relightable Holoported Characters: Capturing and Relighting Dynamic Human Performance from Sparse Views

Dynamic-eDiTor: Training-Free Text-Driven 4D Scene Editing with Multimodal Diffusion Transformer

PanFlow: Decoupled Motion Control for Panoramic Video Generation

Dual-Projection Fusion for Accurate Upright Panorama Generation in Robotic Vision

ChronosObserver: Taming 4D World with Hyperspace Diffusion Sampling

Generative Video Motion Editing with 3D Point Tracks

SeeU: Seeing the Unseen World via 4D Dynamics-aware Generation

ReCamDriving: LiDAR-Free Camera-Controlled Novel Trajectory Video Generation

Joint 3D Geometry Reconstruction and Motion Generation for 4D Synthesis from a Single Image

BulletTime: Decoupled Control of Time and Camera Pose for Video Generation

Light-X: Generative 4D Video Rendering with Camera and Illumination Control
