The field of face manipulation detection and open-world face recognition is evolving rapidly, with a focus on developing more robust and generalizable methods. Recent research has highlighted the importance of cross-domain generalization, where models detect manipulated faces reliably across different domains and datasets. A key direction is the development of multimodal learning frameworks that integrate visual and textual information to enhance authenticity assessment; these frameworks have shown promising results in detecting face forgeries and deepfakes. Another is open-world face recognition, where systems must recognize and differentiate among both familiar and unfamiliar faces, which requires discovering new, previously unseen identities and capturing subtle identity-specific cues. The reliability of vision-language models under adversarial frequency-domain perturbations is also being investigated, with findings suggesting that these models are fragile and sensitive to frequency-based cues. Minimal sketches of these three directions follow the paper list below.

Noteworthy papers in this area include:

- HAMLET-FFD, which proposes a hierarchical adaptive multimodal learning framework for face forgery detection.
- FaceGCD, which introduces a new open-world face recognition task and a method for dynamically constructing instance-specific feature extractors.
- On the Reliability of Vision-Language Models Under Adversarial Frequency-Domain Perturbations, which exposes a critical vulnerability of vision-language models to subtle, structured perturbations in the frequency domain.
- Visual Language Models as Zero-Shot Deepfake Detectors, which proposes a VLM-based approach to image classification and deepfake detection.
- Bi-Level Optimization for Self-Supervised AI-Generated Face Detection, which introduces a self-supervised detection method based on bi-level optimization.
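To make the multimodal direction concrete, here is a minimal sketch of zero-shot deepfake scoring with an off-the-shelf vision-language model. The CLIP checkpoint and the prompt wording are illustrative assumptions, not the setup used by the papers above.

```python
# A minimal sketch of zero-shot deepfake detection with a CLIP-style
# vision-language model. The checkpoint and prompts below are illustrative
# assumptions, not the configuration from the papers in this digest.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a photo of a real human face",
    "a photo of a fake, AI-generated human face",
]

def fake_probability(image: Image.Image) -> float:
    """Return the softmax probability assigned to the 'fake' prompt."""
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, num_prompts)
    return logits.softmax(dim=-1)[0, 1].item()

# Usage: fake_probability(Image.open("face.png")) > 0.5 flags the image as fake.
```

The appeal of this setup is that the textual prompts do the work of a trained classifier head, which is what makes the approach zero-shot.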
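The open-world recognition idea can be illustrated with a simple identity-discovery step over precomputed face embeddings. DBSCAN and its parameters here are assumptions for illustration only; FaceGCD's actual method, which dynamically constructs instance-specific feature extractors, is considerably more involved.

```python
# A minimal sketch of open-world identity discovery over face embeddings.
# DBSCAN and its eps/min_samples values are illustrative assumptions,
# not the FaceGCD method itself.
import numpy as np
from sklearn.cluster import DBSCAN

def discover_identities(embeddings: np.ndarray, eps: float = 0.4) -> np.ndarray:
    """Group embeddings into identities; the label -1 marks unmatched faces."""
    # Cosine distance ignores embedding magnitude, so raw encoder outputs work.
    return DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(embeddings)

# Usage: labels = discover_identities(np.random.randn(100, 512).astype(np.float32))
# Faces sharing a label are treated as the same (possibly previously unseen) identity.
```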
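Finally, a sketch of what a structured frequency-domain perturbation can look like, assuming a plain FFT edit of a mid-frequency band. The band limits and noise scale are illustrative assumptions, not the attack construction studied in the paper above.

```python
# A minimal sketch of a structured frequency-domain perturbation: noise is
# injected into one radial frequency band of the image spectrum. The band
# and magnitude are illustrative assumptions, not the paper's attack.
import torch

def perturb_frequency_band(image: torch.Tensor,
                           band: tuple[float, float] = (0.25, 0.5),
                           eps: float = 0.02) -> torch.Tensor:
    """Perturb a mid-frequency band of a (C, H, W) image with values in [0, 1]."""
    _, h, w = image.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))

    # Radial frequency mask: 1 inside the chosen band, 0 elsewhere.
    fy = torch.fft.fftshift(torch.fft.fftfreq(h))
    fx = torch.fft.fftshift(torch.fft.fftfreq(w))
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = ((radius >= band[0]) & (radius < band[1])).float()

    # Add noise only inside the masked band, then invert the transform.
    noise = eps * torch.randn_like(image) * spectrum.abs().mean()
    spectrum = spectrum + mask * noise
    perturbed = torch.fft.ifft2(torch.fft.ifftshift(spectrum, dim=(-2, -1))).real
    return perturbed.clamp(0.0, 1.0)
```

Because the edit is confined to one band, it can stay visually subtle while still shifting the frequency statistics that, per the findings above, vision-language models appear sensitive to.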