The field of video generation and analysis is evolving rapidly, with a focus on improving the quality and coherence of generated videos. Recent work introduces new evaluation metrics, most notably the World Consistency Score, a framework for assessing the internal consistency of generated videos. Other advances span controllable pedestrian video editing, video anomaly detection, and video forgery detection based on optical flow residuals and spatial-temporal consistency. Noteworthy papers include the Controllable Pedestrian Video Editing framework, which enables flexible editing of pedestrian videos in multi-view driving scenarios; VideoGuard, which protects video content from unauthorized editing by injecting subtle perturbations that interfere with generative diffusion models; LayerT2V, which composites background and foreground objects layer by layer to enable coherent multi-object synthesis; and SSTGNN, a lightweight Spatial-Spectral-Temporal Graph Neural Network for detecting AI-generated and manipulated videos.
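
Several of these directions share a common primitive: measuring frame-to-frame consistency with optical flow. As a concrete illustration, the minimal sketch below flags temporally inconsistent frames by warping each frame toward its successor with dense Farneback flow and thresholding the photometric residual. This is a generic baseline, not the method of any paper above; the flow parameters, the `suspicious_frames` helper, and the threshold value are all assumptions for illustration.

```python
"""Hedged sketch: flagging temporally inconsistent frames via optical-flow
residuals. Illustrative baseline only; parameters and threshold are assumed."""
import cv2
import numpy as np


def flow_residual(prev_gray: np.ndarray, next_gray: np.ndarray) -> float:
    """Warp prev_gray into alignment with next_gray using dense optical flow
    and return the mean absolute photometric residual of the warp."""
    # Flow from next -> prev: for each pixel in next, where it sits in prev.
    flow = cv2.calcOpticalFlowFarneback(
        next_gray, prev_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sample prev at the flow-displaced positions: warped ~ next if motion
    # is coherent; large residuals hint at splices or generation artifacts.
    warped = cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)
    return float(np.mean(np.abs(warped.astype(np.float32) -
                                next_gray.astype(np.float32))))


def suspicious_frames(video_path: str, threshold: float = 12.0) -> list[int]:
    """Return indices of frames whose residual exceeds the (assumed)
    threshold -- a crude proxy for spatial-temporal inconsistency."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    flagged, idx = [], 1
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if flow_residual(prev_gray, gray) > threshold:
            flagged.append(idx)
        prev_gray, idx = gray, idx + 1
    cap.release()
    return flagged
```

A residual like this could serve either side of the problem: thresholded per frame it is a forgery-detection cue, while averaged over a clip it resembles the kind of internal-consistency signal a metric such as the World Consistency Score aggregates (the actual metric's formulation is not reproduced here).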