The field of immersive technologies and 3D scene understanding is evolving rapidly, with a focus on high-quality, interactive, and photorealistic experiences. Recent work has centered on improving the efficiency and expressiveness of 3D scene representations such as Neural Radiance Fields (NeRF) and Gaussian Splatting (GS), enabling progress in scene reconstruction, robotics, and interactive content creation. The integration of language embeddings and Large Language Models (LLMs) into Gaussian Splatting pipelines has opened new possibilities for text-conditioned generation, editing, and semantic scene understanding. Researchers are also exploring physically controllable relighting of in-the-wild images, combining the physical accuracy of traditional rendering with the photorealistic appearance of neural rendering. Noteworthy papers include:
- GENIE, which introduces a hybrid model combining the strengths of implicit and explicit representations to enable real-time, locality-aware editing.
- Physically Controllable Relighting of Photographs, which presents a self-supervised approach to in-the-wild image relighting that enables fully controllable, physically based illumination editing.