Efficient Video Generation and Editing

Video generation and editing research is advancing rapidly, with a strong focus on improving efficiency and reducing computational cost. Recent work builds more effective and scalable models around diffusion transformers and sparse attention mechanisms, reaching state-of-the-art performance on tasks including video generation, editing, and summarization. Techniques such as test-time training, domain adaptation, and dynamic sparsity are being used to improve model performance and efficiency, while approaches like architecture grafting and content-aware video generation show promise for exploring new architecture designs and cutting training cost. Overall, the field is converging on more efficient, flexible, and higher-quality video generation and editing. Noteworthy papers include: Test-Time Training Done Right, which improves hardware utilization and state capacity; Interactive Video Generation via Domain Adaptation, which enhances perceptual quality and trajectory control; and Flexiffusion, which performs efficient neural architecture search for diffusion models.
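To make the sparsity idea concrete: sparse attention replaces the full quadratic attention over all space-time tokens with attention restricted to a subset of key positions. The toy sketch below (my own illustration, not from any of the papers listed) uses a simple banded temporal window as the sparsity pattern; real systems such as Sparse-vDiT or Chipmunk derive their patterns from learned or dynamic structure in the attention maps.

```python
import numpy as np

def sparse_attention(q, k, v, window=2):
    """Windowed attention: each query attends only to keys within
    `window` positions, instead of all N keys. This is a toy stand-in
    for the pattern-based sparse masks used in video diffusion
    transformers; the banded mask is purely illustrative."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    # Banded mask: position i may attend to j only when |i - j| <= window.
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    scores = np.where(mask, scores, -np.inf)  # masked entries get zero weight
    # Numerically stable softmax over the unmasked entries of each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n, d = 8, 4
q, k, v = rng.normal(size=(3, n, d))
out = sparse_attention(q, k, v, window=2)
print(out.shape)  # (8, 4)
```

With `window=2` each of the 8 queries touches at most 5 keys rather than all 8; at video scale (tens of thousands of space-time tokens) the same idea is what turns an intractable dense attention into a practical one.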

Sources

Test-Time Training Done Right

Interactive Video Generation via Domain Adaptation

MiniMax-Remover: Taming Bad Noise Helps Video Object Removal

Flexiffusion: Training-Free Segment-Wise Neural Architecture Search for Efficient Diffusion Models

Sparse-vDiT: Unleashing the Power of Sparse Attention to Accelerate Video Diffusion Transformers

TalkingMachines: Real-Time Audio-Driven FaceTime-Style Video via Autoregressive Diffusion Models

EdgeVidSum: Real-Time Personalized Video Summarization at the Edge

Chipmunk: Training-Free Acceleration of Diffusion Transformers with Dynamic Column-Sparse Deltas

FullDiT2: Efficient In-Context Conditioning for Video Diffusion Transformers

UNIC: Unified In-Context Video Editing

FPSAttention: Training-Aware FP8 and Sparsity Co-Design for Fast Video Diffusion

FEAT: Full-Dimensional Efficient Attention Transformer for Medical Video Generation

FlowDirector: Training-Free Flow Steering for Precise Text-to-Video Editing

Astraea: A GPU-Oriented Token-wise Acceleration Framework for Video Diffusion Transformers

Exploring Diffusion Transformer Designs via Grafting

ContentV: Efficient Training of Video Generation Models with Limited Compute
