Advances in 3D Shape Generation and Scene Reconstruction

The field of 3D shape generation and scene reconstruction is advancing rapidly, with research converging on more efficient methods for producing high-quality 3D models and scenes. Recent work explores hierarchical and multi-scale representations to improve the accuracy and detail of generated shapes, and integrates semantic information and geometric priors to strengthen scene reconstruction. Notable papers include HierOctFusion, a part-aware multi-scale octree diffusion model for generating fine-grained, sparse object structures, and TiP4GEN, a text-to-panorama 4D scene generation framework for creating 360-degree immersive virtual environments. Also noteworthy are Next Visual Granularity Generation, which decomposes an image into a structured coarse-to-fine sequence for generation, and GOGS, a two-stage framework for inverse rendering of glossy objects via Gaussian surfels.
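To make the octree-based representation concrete, the sketch below (an illustration of the general data structure, not the HierOctFusion method itself) adaptively subdivides a synthetic point cloud into occupied octree cells, yielding the kind of sparse multi-scale structure that octree diffusion models operate on; all names and parameters here are illustrative assumptions.

```python
# Illustrative sketch only: adaptive octree voxelization of a point cloud.
# Occupied cells are subdivided recursively; empty cells are pruned, which
# is what makes octrees a sparse multi-scale 3D representation.
import numpy as np

def build_octree(points, center, half, depth, max_depth, leaves):
    """Recursively subdivide occupied cells; count leaves at max depth."""
    if len(points) == 0:
        return  # empty cell: prune, keeping the structure sparse
    if depth == max_depth:
        leaves[depth] = leaves.get(depth, 0) + 1
        return
    quarter = half / 2
    for dx in (-quarter, quarter):
        for dy in (-quarter, quarter):
            for dz in (-quarter, quarter):
                child_center = center + np.array([dx, dy, dz])
                # half-open bounds so each point falls in exactly one child
                lo = child_center - quarter
                hi = child_center + quarter
                mask = np.all((points >= lo) & (points < hi), axis=1)
                build_octree(points[mask], child_center, quarter,
                             depth + 1, max_depth, leaves)

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 3))  # synthetic cloud in [-1, 1]^3
leaves = {}
build_octree(pts, np.zeros(3), 1.0, 0, 3, leaves)
print(leaves)  # number of occupied leaf cells at the finest level
```

A part-aware variant in the spirit of HierOctFusion would additionally tag each subtree with a semantic part label and run generation coarse-to-fine across the levels.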

Sources

HierOctFusion: Multi-scale Octree-based 3D Shape Generation via Part-Whole-Hierarchy Message Passing

TiP4GEN: Text to Immersive Panorama 4D Scene Generation

Next Visual Granularity Generation

PreSem-Surf: RGB-D Surface Reconstruction with Progressive Semantic Modeling and SG-MLP Pre-Rendering Mechanism

2D Gaussians Meet Visual Tokenizer

Is-NeRF: In-scattering Neural Radiance Field for Blurred Images

GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation

A Real-world Display Inverse Rendering Dataset

GOGS: High-Fidelity Geometry and Relighting for Glossy Objects via Gaussian Surfels

CM2LoD3: Reconstructing LoD3 Building Models Using Semantic Conflict Maps