Advances in Multimodal Image Generation and Editing

The field of multimodal image generation and editing is evolving rapidly, with a focus on developing more controllable and higher-fidelity models. Recent research combines diffusion models, large language models, and vision-language models to improve the quality, controllability, and diversity of generated images. Noteworthy papers in this area include FlexMUSE, which proposes a multimodal unification and semantics enhancement framework for creative writing, and JCo-MVTON, which introduces a jointly controllable multi-modal diffusion transformer for mask-free virtual try-on. Another significant contribution is the Instant Preference Alignment framework, which enables preference-aligned text-to-image generation in real time without additional training. In addition, the All-in-One Slider module has been proposed for attribute manipulation in diffusion models, allowing fine-grained, continuous control over multiple attributes (a generic sketch of slider-style control follows below). Together, these advances could substantially improve applications such as virtual try-on, image editing, and content creation.
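
To make the idea of slider-style attribute control concrete, the minimal sketch below blends two CLIP text embeddings with a single scalar before passing them to a Stable Diffusion pipeline. This is a generic illustration, not the method of the All-in-One Slider paper or any other work listed here; the model checkpoint, prompts, and slider value are arbitrary assumptions, and the example only requires the diffusers and transformers packages plus a CUDA device.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a standard Stable Diffusion checkpoint (assumed model name).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def encode(prompt: str) -> torch.Tensor:
    # Tokenize and encode the prompt with the pipeline's CLIP text encoder.
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

# Two prompts describing the attribute's extremes; the scalar "slider"
# interpolates between their embeddings.
young = encode("portrait photo of a young person")
old = encode("portrait photo of an elderly person")
slider = 0.7  # 0.0 = fully "young", 1.0 = fully "elderly"

blended = torch.lerp(young, old, slider)
image = pipe(prompt_embeds=blended, num_inference_steps=30).images[0]
image.save("slider_example.png")
```

Methods such as those surveyed above typically learn such attribute directions rather than hand-picking prompt pairs, but the blending-with-a-scalar interface is the same basic "slider" interaction.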

Sources

FlexMUSE: Multimodal Unification and Semantics Enhancement Framework with Flexible Interaction for Creative Writing

StyleClone: Face Stylization with Diffusion-Based Data Augmentation

Bias Amplification in Stable Diffusion's Representation of Stigma Through Skin Tones and Their Homogeneity

An LLM-LVLM Driven Agent for Iterative and Fine-Grained Image Editing

JCo-MVTON: Jointly Controllable Multi-Modal Diffusion Transformer for Mask-Free Virtual Try-on

Instant Preference Alignment for Text-to-Image Diffusion Models

CEIDM: A Controlled Entity and Interaction Diffusion Model for Enhanced Text-to-Image Generation

Enhancing Reference-based Sketch Colorization via Separating Reference Representations

FCR: Investigating Generative AI models for Forensic Craniofacial Reconstruction

SpotEdit: Evaluating Visually-Guided Image Editing Methods

All-in-One Slider for Attribute Manipulation in Diffusion Models

Fast Texture Transfer for XR Avatars via Barycentric UV Conversion

Interact-Custom: Customized Human Object Interaction Image Generation

Controllable Skin Synthesis via Lesion-Focused Vector Autoregression Model

Not Every Gift Comes in Gold Paper or with a Red Ribbon: Exploring Color Perception in Text-to-Image Models

Audio-Guided Visual Editing with Complex Multi-Modal Prompts

Describe, Don't Dictate: Semantic Image Editing with Natural Language Intent

FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models

ArtFace: Towards Historical Portrait Face Identification via Model Adaptation

CraftGraffiti: Exploring Human Identity with Custom Graffiti Art via Facial-Preserving Diffusion Models

Dress&Dance: Dress up and Dance as You Like It - Technical Preview
