Dense video captioning and summarization are evolving quickly, driven by methods that improve both the accuracy and the efficiency of video analysis. Recent work has shifted toward incorporating explicit position and relation priors, perceptual recognition, and graph-based sentence summarization to raise the quality of video captions. In parallel, self-supervised video summarization frameworks are being explored to reduce reliance on supervised annotations and improve cross-domain applicability. These advances stand to benefit applications such as content moderation, video search, and human motion analysis.

Noteworthy papers include PR-DETR, a dense video captioning framework that injects position and relation priors into the detection transformer to improve localization accuracy and caption quality; PRISM, a lightweight, perceptually aligned keyframe extraction framework that operates in the CIELAB color space and uses perceptual color difference metrics to identify standout moments in video content; and TRIM, a self-supervised video summarization model that captures spatial and temporal dependencies without the overhead of attention, RNNs, or transformers.
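To make the perceptual-difference idea behind PRISM-style keyframe extraction concrete, the following is a minimal sketch, not the PRISM implementation itself: it converts frames to CIELAB, scores each frame by the mean CIE76 Delta E against its predecessor, and keeps the frames with the largest perceptual jumps. The function name `select_keyframes`, the `top_k` parameter, and the reliance on scikit-image are illustrative assumptions.

```python
# Sketch of keyframe selection via perceptual color differences in CIELAB.
# Names and parameters are placeholders, not the actual PRISM method.
import numpy as np
from skimage import color, img_as_float

def select_keyframes(frames, top_k=5):
    """frames: sequence of RGB frames (H, W, 3).
    Returns indices of frames with the largest perceptual change
    relative to the previous frame."""
    labs = [color.rgb2lab(img_as_float(f)) for f in frames]
    # Mean per-pixel CIE76 Delta E between consecutive frames as a simple
    # "perceptual change" score.
    scores = np.array([
        color.deltaE_cie76(labs[i - 1], labs[i]).mean()
        for i in range(1, len(labs))
    ])
    # Frames that follow the biggest perceptual jumps become candidate keyframes.
    top = np.argsort(scores)[::-1][:top_k] + 1
    return sorted(top.tolist())

if __name__ == "__main__":
    # Synthetic frames for demonstration; decoded video frames would be used in practice.
    rng = np.random.default_rng(0)
    frames = [rng.random((64, 64, 3)) for _ in range(30)]
    print(select_keyframes(frames, top_k=3))
```

A real system would add temporal smoothing or non-maximum suppression so that selected keyframes are not clustered around a single scene cut, but the core scoring step stays the same.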