The field of video editing and generation is moving toward more efficient and controllable methods. Researchers are working to improve the quality and coherence of generated videos and to develop new techniques for editing and manipulating video content. One area of focus is adapting image diffusion models to video, which has shown promising results for generating high-quality videos. Another is fine-tuning video diffusion models so that generated videos reflect specific attributes of the training data. Notably, several papers introduce methods such as frequency-aware factorization, adaptive low-pass guidance, and cross-frame representation alignment to improve video editing and generation. There is also growing interest in methods for 3D asset editing and animated storytelling. Particularly noteworthy papers include:

- FADE: a training-free video editing approach that leverages pre-trained video diffusion models via frequency-aware factorization.
- Enhancing Motion Dynamics of Image-to-Video Models via Adaptive Low-Pass Guidance: a simple yet effective method for improving the motion dynamics of generated videos (see the first sketch after this list).
- Cross-Frame Representation Alignment for Fine-Tuning Video Diffusion Models: a novel regularization technique that aligns the hidden states of a frame with external features from neighboring frames (see the second sketch below).
- LoRA-Edit: a mask-based LoRA tuning method that adapts pretrained Image-to-Video models for flexible video editing (see the third sketch below).
- Edit360: enables user-specific editing from arbitrary viewpoints while ensuring structural coherence across all views.
- AniMaker: a multi-agent framework for automated animated storytelling.
- VINCIE: explores whether an in-context image editing model can be learned directly from videos.
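The adaptive low-pass guidance paper is summarized here only at a high level; the summary suggests that suppressing high-frequency detail in the conditioning input helps an image-to-video model produce stronger motion. Below is a minimal sketch of that idea, assuming the guidance amounts to Gaussian-blurring the conditioning frame with a strength that decays over denoising steps; the function name, linear schedule, and `sigma_max` default are illustrative, not the paper's API:

```python
import torch
import torchvision.transforms.functional as TF

def lowpass_condition(cond_frame: torch.Tensor, step: int, total_steps: int,
                      sigma_max: float = 5.0) -> torch.Tensor:
    """Blur the conditioning frame, strongest at early denoising steps.

    cond_frame: (C, H, W) float image tensor.
    step:       current denoising step, 0 = start of sampling.
    The linear decay schedule below is an assumption, not the paper's.
    """
    progress = step / max(total_steps - 1, 1)   # 0 -> 1 over the sampling run
    sigma = sigma_max * (1.0 - progress)        # heavy blur early, none at the end
    if sigma <= 1e-3:
        return cond_frame
    kernel = int(2 * round(3 * sigma) + 1)      # odd kernel covering ~3 sigma
    return TF.gaussian_blur(cond_frame, kernel_size=kernel, sigma=sigma)
```

At each sampling step, the filtered frame would stand in for the clean conditioning frame fed to the image-to-video model, so early steps commit to coarse motion before fine detail is pinned down.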
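The cross-frame alignment summary describes a regularizer that pulls a frame's hidden states toward external features of neighboring frames. A minimal sketch of such a loss follows, assuming cosine-similarity alignment against the two adjacent frames through a learned projection; the projection head, neighbor window, and tensor layout are all assumptions:

```python
import torch
import torch.nn.functional as F

def cross_frame_alignment_loss(hidden: torch.Tensor, ext_feats: torch.Tensor,
                               proj: torch.nn.Module) -> torch.Tensor:
    """Align each frame's hidden states with its neighbors' external features.

    hidden:    (T, N, D_h) per-frame hidden states from the video diffusion
               model (T frames, N spatial tokens each).
    ext_feats: (T, N, D_e) frozen features from an external encoder
               (e.g. a self-supervised image backbone -- an assumption).
    proj:      learned projection from D_h to D_e (an assumed component).
    Returns a scalar: 1 - mean cosine similarity to adjacent frames.
    """
    z = proj(hidden)                            # (T, N, D_e)
    loss = hidden.new_zeros(())
    for offset in (-1, 1):                      # immediate neighbors only
        src = z[max(0, -offset): z.shape[0] - max(0, offset)]
        tgt = ext_feats[max(0, offset): ext_feats.shape[0] - max(0, -offset)]
        loss = loss + (1.0 - F.cosine_similarity(src, tgt, dim=-1).mean())
    return loss / 2
```

This term would be added to the ordinary diffusion training loss during fine-tuning, encouraging hidden states to stay semantically consistent across adjacent frames.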
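The LoRA-Edit summary names mask-based LoRA tuning but not its exact objective. One plausible reading is a diffusion reconstruction loss weighted by a spatial edit mask, so the LoRA adapters concentrate on the region to be edited while the background is largely preserved. A hedged sketch under that assumption (the mask semantics and background weight are illustrative, not taken from the paper):

```python
import torch

def masked_diffusion_loss(pred_noise: torch.Tensor, true_noise: torch.Tensor,
                          mask: torch.Tensor, bg_weight: float = 0.1) -> torch.Tensor:
    """MSE loss concentrated on the masked edit region.

    pred_noise, true_noise: (B, C, T, H, W) model prediction and target.
    mask:      (B, 1, T, H, W) in {0, 1}; 1 marks pixels the edit should change.
    bg_weight: small weight outside the mask to keep the background stable
               (the exact weighting scheme is an assumption).
    """
    weight = mask + bg_weight * (1.0 - mask)    # full weight inside, small outside
    se = (pred_noise - true_noise).pow(2).mean(dim=1, keepdim=True)
    return (weight * se).sum() / weight.sum().clamp_min(1e-8)
```

Only the LoRA parameters would receive gradients from this loss; the pretrained image-to-video weights stay frozen, which is what makes the adaptation cheap.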