The field of multimodal large language models (MLLMs) is advancing rapidly, with a strong focus on video understanding. Recent work centers on making MLLMs more efficient and effective on long video inputs, with innovations in areas such as task-aware key-value sparsification, parallel encoding strategies, and universal feature coding. These advances have brought notable gains in model performance, compression efficiency, and cross-model generalization. Particularly noteworthy contributions include:

- Video-XL-2, which achieves state-of-the-art results on long video understanding benchmarks while remaining highly efficient.
- IPFormer-VideoLLM, which introduces a new dataset and model that substantially improve multi-shot video understanding.
- DT-UFC, which proposes a universal feature coding approach that aligns heterogeneous feature distributions from different models.
- PEVLM, which presents a parallel encoding strategy that reduces attention computation and improves accuracy on long video understanding tasks.
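
To make the idea of task-aware key-value sparsification more concrete, below is a minimal, generic sketch: cached video tokens are scored by their attention weight under the task/query tokens, and only the most relevant entries are kept. The function name, tensor shapes, and scoring rule are illustrative assumptions for a single attention head; this is not the actual algorithm of Video-XL-2 or any paper listed above.

```python
import torch

def sparsify_kv_cache(keys, values, query_states, keep_ratio=0.25):
    """Generic sketch of task-aware KV sparsification (illustrative only).

    keys, values:  [num_tokens, head_dim] cached K/V for one attention head
    query_states:  [num_query_tokens, head_dim] task/query token states
    keep_ratio:    fraction of cached tokens to retain
    """
    # Score each cached token by its attention weight under the task query.
    scores = torch.softmax(
        query_states @ keys.T / keys.shape[-1] ** 0.5, dim=-1
    )                                   # [num_query_tokens, num_tokens]
    importance = scores.mean(dim=0)     # average relevance per cached token

    # Keep only the most task-relevant tokens, preserving temporal order.
    k = max(1, int(keep_ratio * keys.shape[0]))
    keep_idx = importance.topk(k).indices.sort().values
    return keys[keep_idx], values[keep_idx]


# Toy usage: 1,000 cached video tokens, 8 task-query tokens, 64-dim head.
keys = torch.randn(1000, 64)
values = torch.randn(1000, 64)
query = torch.randn(8, 64)
small_k, small_v = sparsify_kv_cache(keys, values, query, keep_ratio=0.1)
print(small_k.shape)  # torch.Size([100, 64])
```

The design choice here is simply top-k selection by query-conditioned attention mass; published methods typically add further machinery (per-layer budgets, chunked selection, or learned importance predictors) on top of this basic idea.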