Face recognition and deepfake detection are evolving rapidly, with recent work focused on improving model accuracy and robustness. One emerging theme is recognizability, a key perceptual factor in human face processing, which recent work leverages to enhance feature representations and improve the performance of face recognition systems. Another is the growing demand for explainable and transparent models, particularly in deepfake detection, where the ability to provide verifiable reasoning explanations is crucial. Researchers are also exploring new defenses against adversarial attacks, including multimodal large language models and ensemble-based detectors.

Noteworthy papers in this area include QCFace, which introduces a hard margin strategy for face recognition, and EDVD-LLaMA, which proposes an explainable deepfake video detection framework. Latent Feature Alignment contributes a method for discovering biased subpopulations in face recognition models, while Fake-in-Facext targets fine-grained, explainable deepfake analysis.
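To ground the "hard margin strategy" mentioned above: QCFace's exact formulation is not described here, but margin-based methods in this family typically modify the softmax logits so the target class must be matched with an extra angular margin. The sketch below shows a generic additive angular-margin head (ArcFace-style) as an illustration of that family; the class `MarginSoftmaxHead` and its `margin`/`scale` defaults are assumptions for this example, not QCFace's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmaxHead(nn.Module):
    """Additive angular-margin classification head (ArcFace-style).

    Illustrative sketch only: QCFace's hard-margin formulation may differ;
    this shows the generic margin-based softmax family such methods build on.
    """

    def __init__(self, embed_dim: int, num_classes: int,
                 margin: float = 0.5, scale: float = 64.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.margin = margin
        self.scale = scale

    def forward(self, embeddings: torch.Tensor,
                labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized embeddings and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the angular margin only to the target-class logit, making the
        # true class harder to match and tightening intra-class clusters.
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cosine)
        return self.scale * logits  # feed directly to cross-entropy

# Usage: loss = F.cross_entropy(head(embeddings, labels), labels)
```

The scale factor compensates for the reduced logit range after normalization, and the margin is applied in angle space rather than cosine space, which is what distinguishes this family from plain softmax training.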
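On discovering biased subpopulations: the Latent Feature Alignment paper's algorithm is not detailed here, but a common baseline for this task is to cluster face embeddings and flag clusters whose verification accuracy falls well below average. The sketch below illustrates that baseline; the function `find_low_accuracy_clusters`, the K-means choice, and the thresholds are hypothetical stand-ins, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_low_accuracy_clusters(embeddings: np.ndarray,
                               pair_idx: np.ndarray,
                               pair_labels: np.ndarray,
                               n_clusters: int = 20,
                               sim_threshold: float = 0.3,
                               gap: float = 0.05) -> dict:
    """Flag embedding clusters with below-average verification accuracy.

    embeddings: (N, D) face embeddings; pair_idx: (P, 2) indices of
    verification pairs; pair_labels: (P,) bool, True if same identity.
    Hypothetical sketch of subpopulation discovery, not the actual
    Latent Feature Alignment algorithm.
    """
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    # Cosine similarity for each verification pair.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = (norm[pair_idx[:, 0]] * norm[pair_idx[:, 1]]).sum(axis=1)
    preds = sims > sim_threshold
    # Score accuracy per cluster, assigning each pair to its first image.
    per_cluster = {}
    for c in range(n_clusters):
        mask = clusters[pair_idx[:, 0]] == c
        if mask.any():
            per_cluster[c] = float((preds[mask] == pair_labels[mask]).mean())
    mean_acc = float(np.mean(list(per_cluster.values())))
    # Clusters trailing the mean by more than `gap` are candidate biases.
    return {c: a for c, a in per_cluster.items() if a < mean_acc - gap}
```

Flagged clusters can then be inspected for shared attributes (e.g., demographics or image conditions) to characterize the biased subpopulation.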