The field of animation and optical flow estimation is seeing significant advances, driven by the development of new models and datasets. Researchers are focusing on more realistic and coherent animation, with an emphasis on reference-guided video generation and multi-shot animation. New benchmark suites and datasets are enabling more accurate evaluation and comparison of models, leading to targeted improvements and stronger performance. Noteworthy papers in this area include AnimeShooter, which presents a comprehensive multi-shot animation dataset; Learning Optical Flow Field via Neural Ordinary Differential Equation, which predicts optical flow by modeling it with a neural ordinary differential equation; and EDCFlow, which exploits temporally dense difference maps for event-based optical flow estimation, achieving high-quality flow at higher resolutions.
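To make the neural-ODE idea concrete, here is a minimal PyTorch sketch that treats flow estimation as integrating a learned derivative over pseudo-time with a fixed-step Euler solver. The `FlowDerivative` network, the feature dimensions, and the step count are illustrative assumptions, not the architecture proposed in the cited paper.

```python
# Illustrative sketch only: flow refinement modeled as a continuous-time ODE,
# integrated with fixed-step Euler. Network and hyperparameters are assumptions.
import torch
import torch.nn as nn


class FlowDerivative(nn.Module):
    """Predicts d(flow)/dt from image-pair features and the current flow estimate."""

    def __init__(self, feat_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_channels + 2, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1),  # 2 output channels: (u, v) velocity
        )

    def forward(self, feats: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([feats, flow], dim=1))


def integrate_flow(deriv: FlowDerivative,
                   feats: torch.Tensor,
                   steps: int = 8) -> torch.Tensor:
    """Euler integration of the flow field from t=0 to t=1."""
    b, _, h, w = feats.shape
    flow = torch.zeros(b, 2, h, w, device=feats.device)  # start from zero flow
    dt = 1.0 / steps
    for _ in range(steps):
        # flow_{t+dt} = flow_t + dt * f(feats, flow_t)
        flow = flow + dt * deriv(feats, flow)
    return flow


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)       # placeholder features for an image pair
    flow = integrate_flow(FlowDerivative(), feats)
    print(flow.shape)                         # torch.Size([1, 2, 64, 64])
```

An adaptive solver (for example, torchdiffeq's `odeint`) could replace the fixed Euler loop; the fixed-step version is used here only to keep the sketch dependency-free.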