The field of image analysis is undergoing a significant shift toward Vision Foundation Models (VFMs). Researchers are applying VFMs to remote sensing change detection, text manipulation detection, and synthetic image detection, reporting gains in both accuracy and efficiency. A key advantage is the ability of VFMs to generalize across datasets and domains, and their performance is further improved by parameter-efficient fine-tuning and prompt-guided knowledge injection. Overall, the field is moving toward more robust and scalable VFM-based frameworks for image analysis.

Noteworthy papers include PeftCD, which builds a change detection framework on VFMs with parameter-efficient fine-tuning and achieves state-of-the-art results across multiple public datasets; Brought a Gun to a Knife Fight, which shows that modern VFMs outperform specialized detectors at identifying AI-generated images; and DF-LLaVA, which unlocks the potential of MLLMs for synthetic image detection through prompt-guided knowledge injection, combining strong detection accuracy with explainability.
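
To make the parameter-efficient fine-tuning idea concrete, the sketch below freezes a stand-in vision backbone and injects trainable low-rank (LoRA-style) adapters, so only a small fraction of parameters is updated. It is a minimal illustration under assumed dimensions and a toy backbone, not the specific architecture of PeftCD or any other cited work.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank (LoRA-style) update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the foundation-model weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Toy "backbone": a stack of linear blocks standing in for a ViT's projections
# (dimensions are placeholders, not taken from any of the papers above).
backbone = [nn.Linear(256, 256) for _ in range(4)]

# Inject adapters; only the low-rank matrices and the task head remain trainable.
adapted = nn.Sequential(*[LoRALinear(layer) for layer in backbone], nn.Linear(256, 2))

trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"trainable params: {trainable}/{total}")  # a small fraction of the total
```

In practice the same wrapping is applied to the attention and MLP projections of a pretrained VFM, which is what keeps the fine-tuning footprint small while preserving the backbone's generalization.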