The field of image and 3D editing is advancing rapidly through the application of diffusion models. These models are being used to generate 3D hair geometry from a single image, perform motion estimation, and enable controllable image colorization. Diffusion-based methods also support generative image manipulation, allowing semantic and local edits with pixel precision.
Noteworthy papers in this area include DiffLocks, which reconstructs a wide variety of detailed hairstyles directly from a single image. StableMotion leverages knowledge from pretrained large-scale image diffusion models to perform motion estimation, solving single-image rectification tasks. IntrinsicEdit introduces a versatile generative workflow that operates in an intrinsic-image latent space, enabling semantic, local manipulation with pixel precision. LightLab presents a simple yet effective diffusion-based method for fine-grained, parametric control over light sources in an image. 3D-Fixup proposes a framework for editing 2D images guided by learned 3D priors, supporting difficult edits such as object translation and 3D rotation.
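To make the general paradigm concrete, the sketch below shows prompt-driven, diffusion-based image editing with the open-source diffusers library and the publicly released InstructPix2Pix checkpoint. This is only an illustration of the broad recipe described above, not the method of any of the papers summarized here; the model name, file paths, prompt, and guidance values are assumptions for the example.

```python
# Minimal sketch of diffusion-based image editing (illustrative only; not the
# pipeline of any paper summarized above). Assumes the open-source `diffusers`
# library and the public "timbrooks/instruct-pix2pix" checkpoint.
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained instruction-following editing pipeline.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# "input.png" is a placeholder path for the image to be edited.
source = Image.open("input.png").convert("RGB")

# The text prompt specifies the desired edit; the two guidance scales trade off
# adherence to the prompt against preservation of the source image.
edited = pipe(
    prompt="turn the lamp on",   # hypothetical edit instruction
    image=source,
    num_inference_steps=20,
    guidance_scale=7.5,          # text guidance strength
    image_guidance_scale=1.5,    # source-image guidance strength
).images[0]

edited.save("edited.png")
```

In practice, the editing-specific papers above replace this generic text-conditioned pipeline with task-specific conditioning, e.g. an intrinsic-image latent space (IntrinsicEdit), parametric light-source controls (LightLab), or learned 3D priors (3D-Fixup).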