Deepfake Detection and Robustness Advances

The field of deepfake detection is moving toward more integrated and robust approaches, with a focus on fusing spatial and temporal features to catch subtle, time-dependent manipulations. Researchers are exploring methods such as cross-attention mechanisms and language-driven segmental manipulation to improve the accuracy and reliability of detection models. Noteworthy papers in this area include CAST, which uses cross-attention to fuse spatial and temporal features, and PhonemeFake, which introduces a language-driven approach that manipulates critical speech segments. Other notable contributions include CDAL, which targets open-world model attribution via counterfactually decoupled attention, and DDL, a dataset for interpretable deepfake detection and localization.
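To make the fusion idea concrete, here is a minimal sketch of generic cross-attention between two feature streams, where spatial tokens attend over temporal tokens. This is an illustration of the general mechanism only, not CAST's actual architecture; all shapes, names, and the lack of learned projections are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(spatial, temporal):
    """Fuse two feature streams via cross-attention (illustrative sketch).

    spatial:  (n_s, d) spatial feature tokens  -- hypothetical shapes
    temporal: (n_t, d) temporal feature tokens -- hypothetical shapes
    Returns a (n_s, d) fused representation.
    """
    d = spatial.shape[-1]
    # Queries come from one stream, keys/values from the other --
    # this is what distinguishes cross-attention from self-attention.
    q, k, v = spatial, temporal, temporal
    weights = softmax(q @ k.T / np.sqrt(d))  # (n_s, n_t) attention weights
    return weights @ v                       # (n_s, d) fused features

rng = np.random.default_rng(0)
fused = cross_attention(rng.normal(size=(16, 64)), rng.normal(size=(8, 64)))
print(fused.shape)  # (16, 64)
```

Real detectors would add learned query/key/value projections, multiple heads, and residual connections, but the core fusion step is this weighted mixing of one stream's features under the other stream's attention.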

Sources

CAST: Cross-Attentive Spatio-Temporal feature fusion for Deepfake detection

PhonemeFake: Redefining Deepfake Realism with Language-Driven Segmental Manipulation and Adaptive Bilevel Detection

Learning Counterfactually Decoupled Attention for Open-World Model Attribution

DDL: A Dataset for Interpretable Deepfake Detection and Localization in Real-World Scenarios

Holmes: Towards Effective and Harmless Model Ownership Verification to Personalized Large Vision Models via Decoupling Common Features

TAGF: Time-aware Gated Fusion for Multimodal Valence-Arousal Estimation

De-AntiFake: Rethinking the Protective Perturbations Against Voice Cloning Attacks

Fair Deepfake Detectors Can Generalize
