Advances in Computer Vision for Virtual Try-On and Garment Manipulation

The field of computer vision is moving toward more realistic and interactive virtual try-on and garment-manipulation experiences. Researchers are developing novel methods for reconstructing human avatars, manipulating clothing, and estimating the shape and appearance of fabrics, advances with the potential to reshape how the fashion and retail industries present and fit clothing. Noteworthy papers include DAOVI, which proposes a distortion-aware deep learning model for omnidirectional video inpainting; LUIVITON, which presents an end-to-end system for fully automated virtual try-on; and SAFT, which reconstructs the shape and appearance of fabrics from monocular video via differentiable physical simulation.

Sources

DAOVI: Distortion-Aware Omnidirectional Video Inpainting

DevilSight: Augmenting Monocular Human Avatar Reconstruction through a Virtual Perspective

Reactive In-Air Clothing Manipulation with Confidence-Aware Dense Correspondence and Visuotactile Affordance

LUIVITON: Learned Universal Interoperable VIrtual Try-ON

SAFT: Shape and Appearance of Fabrics from Template via Differentiable Physical Simulations from Monocular Video
