Advances in Video Generation and Analysis

The field of video generation and analysis is evolving rapidly, with an emphasis on improving the quality and temporal coherence of generated videos. Recent work introduces new evaluation metrics, most notably the World Consistency Score, which provides a unified framework for assessing the internal consistency of generated videos. Other advances span controllable pedestrian video editing for multi-view driving scenarios, weakly-supervised video anomaly detection, and video forgery detection based on optical flow residuals and spatial-temporal consistency. Several methods address protection and synthesis: VideoGuard protects video content from unauthorized editing by adding subtle perturbations that interfere with generative diffusion models; LayerT2V generates video by compositing background and foreground objects layer by layer, enabling coherent multi-object synthesis; and SSTGNN detects AI-generated and manipulated videos with a lightweight Spatial-Spectral-Temporal Graph Neural Network.
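
To give a concrete sense of what a frame-level coherence measure can look like, the sketch below averages SSIM over consecutive frames of a clip. This toy function (its name, dependencies, and scoring rule are illustrative assumptions) is not the World Consistency Score itself, only a crude proxy for temporal consistency.

```python
# Toy temporal-consistency proxy, NOT the World Consistency Score.
# It averages SSIM over consecutive frame pairs: abrupt, incoherent
# content changes between frames pull the score down.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def toy_consistency_score(video_path: str) -> float:
    cap = cv2.VideoCapture(video_path)
    prev_gray, scores = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # SSIM between consecutive grayscale frames (uint8, range 0-255)
            scores.append(ssim(prev_gray, gray, data_range=255))
        prev_gray = gray
    cap.release()
    return float(np.mean(scores)) if scores else 0.0
```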

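The forgery-detection direction builds on optical flow residuals. As a hedged illustration of that general principle (not the cited paper's pipeline), the sketch below computes dense Farneback flow between two frames, warps the second frame back onto the first, and returns the residual; regions the flow cannot explain, such as spliced or regenerated content, tend to show elevated residuals.

```python
# Illustrative flow-residual computation using OpenCV's Farneback flow.
# This shows the general idea only, not the cited paper's method.
import cv2
import numpy as np

def flow_residual(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> np.ndarray:
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Dense optical flow from prev to curr
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample curr at the flow-displaced positions to warp it back onto prev
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped_curr = cv2.remap(curr, map_x, map_y, cv2.INTER_LINEAR)
    # Large residuals mark regions the motion model cannot explain
    return cv2.absdiff(prev, warped_curr)
```

Thresholding or pooling this residual map per frame would give a simple anomaly signal; the learned spatial-temporal consistency models in the cited work go well beyond this sketch.
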
Sources

World Consistency Score: A Unified Metric for Video Generation Quality

Controllable Pedestrian Video Editing for Multi-View Driving Scenarios via Motion Sequence

GV-VAD: Exploring Video Generation for Weakly-Supervised Video Anomaly Detection

IN2OUT: Fine-Tuning Video Inpainting Model for Video Outpainting Using Hierarchical Discriminator

Video Forgery Detection with Optical Flow Residuals and Spatial-Temporal Consistency

Video Color Grading via Look-Up Table Generation

D3: Training-Free AI-Generated Video Detection Using Second-Order Features

Video Demoireing using Focused-Defocused Dual-Camera System

VideoGuard: Protecting Video Content from Unauthorized Editing

LayerT2V: Interactive Multi-Object Trajectory Layering for Video Generation

Circuit-Aware SAT Solving: Guiding CDCL via Conditional Probabilities

Multi-Stage Knowledge-Distilled VGAE and GAT for Robust Controller-Area-Network Intrusion Detection

Automatic Image Colorization with Convolutional Neural Networks and Generative Adversarial Networks

When Deepfake Detection Meets Graph Neural Network: A Unified and Lightweight Learning Framework

Tractable Sharpness-Aware Learning of Probabilistic Circuits
