The field of panoramic image and video processing is advancing quickly, driven by the challenges that spherical geometry and projection distortions pose for methods designed on planar images. Researchers are developing new approaches to editing, understanding, and analyzing panoramic content, including adaptive reprojection, great-circle trajectory adjustment, and spherical search-region tracking. Large-scale labeled datasets such as Leader360V are also emerging, enabling more accurate and efficient training of models for 360 video segmentation and tracking. Applying multimodal large language models to dense understanding of omnidirectional panoramas is another promising direction, supported by the introduction of the Dense360 dataset and the Dense360-Bench benchmark.

Noteworthy papers include SphereDrag, which proposes a novel panoramic editing framework; Leader360V, which introduces a large-scale, labeled 360 video dataset; Dense360, which presents a comprehensive suite of reliability-scored annotations for omnidirectional panoramas; and Omnidirectional Video Super-Resolution using Deep Learning, which proposes a deep learning model for 360 video super-resolution.
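To make the great-circle idea concrete, the sketch below shows how a drag trajectory on an equirectangular panorama can be kept on a great circle of the sphere, rather than a straight line in pixel space that would cut through projection distortion. This is a minimal illustration under assumed conventions (standard equirectangular mapping, slerp interpolation); the function names are hypothetical and it is not SphereDrag's actual implementation.

```python
import numpy as np

def equirect_to_unit(u, v, width, height):
    """Map equirectangular pixel coordinates to a unit vector on the sphere."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi       # latitude in [-pi/2, pi/2]
    return np.array([
        np.cos(lat) * np.cos(lon),
        np.cos(lat) * np.sin(lon),
        np.sin(lat),
    ])

def unit_to_equirect(p, width, height):
    """Map a unit vector on the sphere back to equirectangular pixel coordinates."""
    lon = np.arctan2(p[1], p[0])
    lat = np.arcsin(np.clip(p[2], -1.0, 1.0))
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v

def great_circle_path(start_px, end_px, width, height, steps=32):
    """Sample the great-circle arc between two panorama pixels via slerp."""
    a = equirect_to_unit(*start_px, width, height)
    b = equirect_to_unit(*end_px, width, height)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # angle between endpoints
    if omega < 1e-8:                                      # coincident points
        return [start_px] * steps
    # Note: for nearly antipodal endpoints the great circle is ill-defined.
    path = []
    for t in np.linspace(0.0, 1.0, steps):
        p = (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
        path.append(unit_to_equirect(p, width, height))
    return path

# Example: a drag across the horizontal seam of a 2048x1024 panorama follows
# the short spherical arc instead of a long straight line in pixel space.
pts = great_circle_path((100, 512), (1948, 512), 2048, 1024, steps=8)
```

The same pixel-to-sphere mapping underlies adaptive reprojection and spherical search regions: by operating on unit vectors instead of raw pixel coordinates, edits and tracking windows behave consistently near the poles and across the longitude seam.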