Advancements in 3D Editing and Generation

The field of 3D editing and generation is advancing rapidly, with a focus on improving the quality and multi-view consistency of generated 3D assets. Recent work has produced models that manipulate 3D geometry directly and produce high-fidelity edits. There is also growing interest in style transfer for 3D scenes, with methods that extract high-level style semantics from reference images and transfer them while preserving scene structure. A third line of research targets efficient, scalable image-to-3D texture mapping, generating high-quality textures in a single forward pass.

Noteworthy papers in this area include 3D-LATTE, a training-free editing method that operates within the latent space of a native 3D diffusion model, and SSGaussian, a 3D style transfer pipeline that integrates prior knowledge from pretrained 2D diffusion models. Together, these advances stand to improve both the quality and the versatility of generated 3D content.
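To make the "editing in latent space" idea concrete, the sketch below illustrates the general SDEdit-style pattern that diffusion-based editors build on: encode the source asset, partially noise its latent, then denoise under an edit-conditioned model so the result stays anchored to the original geometry. This is a toy illustration under stated assumptions, not 3D-LATTE's actual algorithm; `encode`, `decode`, and `denoise` are hypothetical stubs.

```python
# Toy sketch of the generic latent-space editing loop used by
# diffusion-based editors. All functions here are hypothetical
# stand-ins, not the API of 3D-LATTE or any real model.
import numpy as np

rng = np.random.default_rng(0)

def encode(asset):
    """Hypothetical encoder: maps a 3D asset to a latent vector."""
    return np.asarray(asset, dtype=np.float64)

def decode(latent):
    """Hypothetical decoder: maps a latent back to a 3D asset."""
    return latent

def denoise(latent, t, instruction_strength=0.5):
    """Toy denoiser standing in for a pretrained 3D diffusion model.
    A real model would condition on the text instruction; here we just
    pull the latent toward a fixed 'edited' target to show the flow."""
    target = np.full_like(latent, instruction_strength)
    return latent + 0.1 * (target - latent)

def edit_in_latent_space(asset, noise_level=0.6, steps=30):
    """SDEdit-style editing: partially noise the source latent, then
    denoise it with the edit-conditioned model. Starting from the source
    latent, rather than pure noise, preserves the original geometry."""
    z = encode(asset)
    z_t = (np.sqrt(1.0 - noise_level) * z
           + np.sqrt(noise_level) * rng.standard_normal(z.shape))
    for t in range(steps):
        z_t = denoise(z_t, t)
    return decode(z_t)

edited = edit_in_latent_space(np.zeros(8))
print(edited.round(3))
```

The key design choice this pattern exposes is the noise level: noising the latent only partway trades off edit strength against fidelity to the source, which is why such methods can make targeted changes without regenerating the asset from scratch.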

Sources

3D-LATTE: Latent Space 3D Editing from Textual Instructions

MarkSplatter: Generalizable Watermarking for 3D Gaussian Splatting Model via Splatter Image Structure

Neural Scene Designer: Self-Styled Semantic Image Manipulation

Category-Aware 3D Object Composition with Disentangled Texture and Shape Multi-view Diffusion

MEPG: Multi-Expert Planning and Generation for Compositionally-Rich Image Generation

From Editor to Dense Geometry Estimator

SSGaussian: Semantic-Aware and Structure-Preserving 3D Style Transfer

A Scalable Attention-Based Approach for Image-to-3D Texture Mapping

Improved 3D Scene Stylization via Text-Guided Generative Image Editing with Region-Based Control

OmniStyle2: Scalable and High Quality Artistic Style Transfer Data Generation via Destylization
