Video motion generation and editing is advancing rapidly, driven by the search for more efficient and effective methods of producing and modifying video content. A central line of research applies diffusion models to video generation and editing, with particular emphasis on preserving the continuity of motion dynamics and maintaining semantic consistency. A complementary direction develops novel representations and pipelines for video editing, such as pose and position priors that enable flexible, structure-preserving edits. Notable papers in this area include MoMaps, which proposes a pixel-aligned motion map representation for 3D scene motion generation, and Edit-Your-Interest, which introduces a lightweight, text-driven video editing method achieving high efficiency and visual fidelity. Overall, the field is converging on increasingly sophisticated methods that couple semantic consistency with continuous motion dynamics.
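To make the idea of a pixel-aligned motion representation concrete, the sketch below assumes a motion map stores a 3D displacement for the scene point observed at each pixel of a reference frame; the shapes, function name, and warping step are illustrative assumptions, not the actual MoMaps formulation.

```python
import numpy as np

def apply_motion_map(points_3d: np.ndarray, motion_map: np.ndarray) -> np.ndarray:
    """Advance per-pixel 3D scene points by their predicted displacements.

    points_3d:  (H, W, 3) 3D position of the scene point behind each pixel.
    motion_map: (H, W, 3) hypothetical per-pixel 3D displacement field.
    """
    assert points_3d.shape == motion_map.shape
    return points_3d + motion_map

# Toy example: a flat 4x6 scene translated uniformly along the x axis.
H, W = 4, 6
points = np.zeros((H, W, 3))
motion = np.full((H, W, 3), [0.1, 0.0, 0.0])
moved = apply_motion_map(points, motion)
```

Because the map shares the image's pixel grid, a generative model can predict it with the same architectures used for dense image prediction, which is part of what makes such representations attractive for motion generation.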