Deepfake Detection and Defense

The field of deepfake detection and defense is evolving rapidly, with a focus on developing robust, generalizable methods to counter the growing threat of AI-generated content. Recent research has emphasized adapting to unknown manipulation techniques and improving the persistence of active defense strategies. Innovations include parameter-efficient adaptation of pre-trained models, dual-function adversarial perturbations, and emulation of social network compression pipelines. Noteworthy papers include LNCLIP-DF, which achieves state-of-the-art deepfake detection through a minimal adaptation of a pre-trained CLIP model; the Two-Stage Defense Framework, which combines interruption and poisoning to prevent attackers from retraining their models on protected images; the Dual-Path Guidance Network, which leverages unlabeled data and reports a substantial performance gain over existing detectors; the Few-shot Training-free Network, which enables real-world few-shot deepfake detection without requiring large-scale data from known forgeries for training; and the Forgery Guided Learning strategy, which lets detection networks adapt to unknown forgery techniques and improves cross-domain detection performance.
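To make the parameter-efficient adaptation trend concrete, the sketch below freezes a pre-trained CLIP vision encoder and trains only its LayerNorm parameters together with a small binary head. This is an illustrative sketch assuming a LayerNorm-only tuning scheme in the spirit of LNCLIP-DF; the checkpoint name, head design, and class are placeholders rather than the paper's exact recipe.

```python
# Sketch: parameter-efficient CLIP adaptation for real/fake classification by
# tuning only LayerNorm parameters (illustrative; not the exact LNCLIP-DF recipe).
import torch.nn as nn
from transformers import CLIPVisionModel


class LayerNormTunedCLIP(nn.Module):
    def __init__(self, backbone_name="openai/clip-vit-base-patch32"):
        super().__init__()
        self.backbone = CLIPVisionModel.from_pretrained(backbone_name)

        # Freeze everything, then re-enable gradients for LayerNorm modules only.
        for p in self.backbone.parameters():
            p.requires_grad = False
        for m in self.backbone.modules():
            if isinstance(m, nn.LayerNorm):
                for p in m.parameters():
                    p.requires_grad = True

        # Small trainable head producing a single real/fake logit.
        self.head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, pixel_values):
        pooled = self.backbone(pixel_values=pixel_values).pooler_output
        return self.head(pooled).squeeze(-1)


model = LayerNormTunedCLIP()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")  # LayerNorms + head only
```

Training then optimizes only the parameters with requires_grad=True (for example with a binary cross-entropy loss on real/fake labels), keeping the adapted parameter count to a small fraction of the full model.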
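On the active-defense side, the following sketch shows one common way a protective ("interruption") perturbation can be crafted with projected gradient ascent so that a face-manipulation model produces degraded output on the protected image. The generator here is a stand-in and the poisoning stage is not reproduced; this is a hedged illustration of the general mechanism, not the Two-Stage Defense Framework's actual method.

```python
# Sketch: PGD-style protective perturbation ("interruption"). The generator is a
# stand-in model; the cited framework's losses and poisoning stage differ.
import torch
import torch.nn.functional as F


def protective_perturbation(image, generator, steps=10, eps=8 / 255, alpha=2 / 255):
    """Perturb `image` so the manipulation model's output drifts far from its
    clean output, degrading deepfakes synthesized from the protected image."""
    with torch.no_grad():
        clean_out = generator(image)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(generator(image + delta), clean_out)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent: maximize distortion
            delta.clamp_(-eps, eps)              # keep the perturbation small
        delta.grad = None
    return (image + delta).clamp(0, 1).detach()


# Toy usage with a stand-in "generator" and an image in [0, 1].
generator = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
protected = protective_perturbation(torch.rand(1, 3, 128, 128), generator)
```

The poisoning stage described in the source, which targets attackers retraining their models on protected images, would add further objectives on top of this and is omitted here.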
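Finally, emulating social network compression pipelines is, at its simplest, a matter of degrading training data the way sharing platforms do. The sketch below applies random downscaling and JPEG re-encoding to a frame; it is an assumed approximation for illustration, not the specific emulation pipeline of the cited paper.

```python
# Sketch: emulate social-network-style degradation (resize + JPEG recompression)
# as a training-time augmentation. Assumed approximation; the cited paper's
# emulation pipeline may differ.
import io
import random
from PIL import Image


def emulate_share_pipeline(img, scales=(1.0, 0.75, 0.5), quality_range=(40, 90)):
    # Downscale as platforms often do, then re-encode as JPEG at a random quality.
    scale = random.choice(scales)
    if scale < 1.0:
        w, h = img.size
        img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.BILINEAR)
    quality = random.randint(*quality_range)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```

Applying such degradations to training frames is one way to narrow the gap between lab benchmarks and videos that have already passed through sharing platforms.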
Sources
Boosting Active Defense Persistence: A Two-Stage Defense Framework Combining Interruption and Poisoning Against Deepfake
Bridging the Gap: A Framework for Real-World Video Deepfake Detection via Social Network Compression Emulation
When Deepfakes Look Real: Detecting AI-Generated Faces with Unlabeled Data due to Annotation Challenges
Leveraging Failed Samples: A Few-Shot and Training-Free Framework for Generalized Deepfake Detection