Advances in Image and Video Processing

The field of image and video processing is evolving rapidly, with particular focus on style transfer, relighting, and super-resolution. Researchers are exploring new ways to disentangle style from content, enabling more efficient and controllable image manipulation, and there is growing interest in using large vision-language models to generate high-quality images and videos with controlled attributes. Diffusion models and generative adversarial networks are increasingly the tools of choice for tasks such as image relighting and video super-resolution. Noteworthy papers in this area include CDST, which reports state-of-the-art results in color-disentangled style transfer, and DreamLight, which targets harmonious and consistent image relighting. StyleWallfacer introduces a unified training and inference framework for style transfer and text-driven stylization, One-Step Diffusion pursues detail-rich and temporally consistent video super-resolution, and UniRelight learns joint decomposition and synthesis for video relighting. An illustrative sketch of the kind of diffusion-based image manipulation these works build on follows below.
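To make the diffusion-based image manipulation mentioned above concrete, the following is a minimal sketch of text-guided restyling using the Hugging Face diffusers image-to-image pipeline. The checkpoint name, prompt, and strength settings are illustrative assumptions only; this is not the method of CDST, DreamLight, or any other paper cited here.

```python
# Minimal sketch: text-guided image restyling with a diffusion
# image-to-image pipeline (illustrative only; not the setup of any
# paper cited above). Assumes `torch`, `diffusers`, and `Pillow`.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Hypothetical checkpoint choice; any SD 1.x img2img-compatible model works.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

content = Image.open("portrait.png").convert("RGB").resize((512, 512))

# `strength` trades content preservation against stylization; the prompt
# carries the target style, loosely analogous to a style reference.
result = pipe(
    prompt="oil painting in an impressionist style, warm lighting",
    image=content,
    strength=0.55,
    guidance_scale=7.5,
).images[0]

result.save("portrait_stylized.png")
```

Lower `strength` values keep the output closer to the input content, while higher values follow the prompt more aggressively, which is the basic content-versus-style trade-off the cited style-transfer work tries to control more precisely.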

Sources

CDST: Color Disentangled Style Transfer for Universal Style Reference Customization

Disentangling 3D from Large Vision-Language Models for Controlled Portrait Generation

DreamLight: Towards Harmonious and Consistent Image Relighting

Break Stylistic Sophon: Are We Really Meant to Confine the Imagination in Style Transfer?

One-Step Diffusion for Detail-Rich and Temporally Consistent Video Super-Resolution

UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting
