Advancements in Audio and Image Forgery Detection

The field of audio and image forgery detection is rapidly evolving, with a focus on developing more robust and generalizable methods. Recent research highlights the value of frequency-domain cues, such as frequency bias and spectral contrast, for detecting deepfakes, as well as the need for watermarking schemes that can flag AI-generated audio and other synthetic content while withstanding removal attacks.
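To make the frequency-domain intuition concrete, the sketch below is a generic illustration (not the method of any paper listed here): it computes the 2D FFT of a grayscale image and measures how much spectral energy lies outside a low-frequency region. Frequency-focused detectors build on richer versions of such features, since many generators leave characteristic high-frequency artifacts. The function name and the cutoff parameter are illustrative choices.

```python
# Minimal sketch (illustrative only): summarize how much of an image's
# spectral energy sits in high frequencies vs. low frequencies.
import numpy as np

def high_frequency_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a centered low-frequency square.

    `image` is a 2D float array; `cutoff` is the half-width of the low-pass
    region as a fraction of each dimension (an illustrative choice).
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # center the zero frequency
    energy = np.abs(spectrum) ** 2

    h, w = energy.shape
    ch, cw = h // 2, w // 2
    dh, dw = int(cutoff * h), int(cutoff * w)

    low = energy[ch - dh:ch + dh, cw - dw:cw + dw].sum()
    total = energy.sum()
    return float((total - low) / (total + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A cumulative-sum field is dominated by low frequencies; white noise is broadband.
    smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
    noisy = rng.normal(size=(64, 64))
    print(high_frequency_energy_ratio(smooth), high_frequency_energy_ratio(noisy))
```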

Noteworthy papers in this area include DHAuDS, which introduces a dynamic and heterogeneous audio benchmark for test-time adaptation, and SONAR, which proposes a spectral-contrastive audio-residual framework for generalizable deepfake detection. Frequency Bias Matters offers a frequency-perspective account of why deep image forgery detectors struggle with generalization and robustness. HarmonicAttack presents an adaptive cross-domain audio watermark removal attack, and TAB-DRW introduces a DFT-based robust watermark for generative tabular data. Together, these works mark clear progress toward more reliable and generalizable forgery detection and more robust watermarking.
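As a rough illustration of DFT-domain watermarking in the spirit of TAB-DRW and the audio watermarks targeted by HarmonicAttack, the toy sketch below (a generic scheme, not either paper's actual construction) adds a keyed pseudorandom pattern to mid-frequency DFT coefficients of a 1D signal and detects it by correlating that band with the same keyed pattern. The function names, band choice, and embedding strength are illustrative assumptions; the strength is exaggerated so the effect is visible on white noise.

```python
# Toy DFT-domain watermark (illustrative only): embed a keyed pseudorandom
# pattern into mid-frequency coefficients, detect it by correlation.
import numpy as np

def embed(signal: np.ndarray, key: int, strength: float = 20.0) -> np.ndarray:
    n = len(signal)
    band = slice(n // 8, n // 4)  # illustrative mid-frequency band
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=band.stop - band.start)

    coeffs = np.fft.rfft(signal)
    coeffs[band] += strength * pattern   # additive spectral perturbation
    return np.fft.irfft(coeffs, n=n)

def detect(signal: np.ndarray, key: int) -> float:
    n = len(signal)
    band = slice(n // 8, n // 4)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=band.stop - band.start)

    coeffs = np.fft.rfft(signal)
    # Correlation with the keyed pattern: large positive score suggests the mark is present.
    return float(np.real(coeffs[band]) @ pattern / len(pattern))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = rng.normal(size=1024)
    marked = embed(clean, key=42)
    print(detect(clean, key=42), detect(marked, key=42))  # marked score is clearly larger
```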

Sources

DHAuDS: A Dynamic and Heterogeneous Audio Benchmark for Test-Time Adaptation

When Generative Replay Meets Evolving Deepfakes: Domain-Aware Relative Weighting for Incremental Face Forgery Detection

SpectraNet: FFT-assisted Deep Learning Classifier for Deepfake Face Detection

Frequency Bias Matters: Diving into Robust and Generalized Deep Image Forgery Detection

Continual Audio Deepfake Detection via Universal Adversarial Perturbation

3-Tracer: A Tri-level Temporal-Aware Framework for Audio Forgery Detection and Localization

SONAR: Spectral-Contrastive Audio Residuals for Generalizable Deepfake Detection

Generalized Design Choices for Deepfake Detectors

HarmonicAttack: An Adaptive Cross-Domain Audio Watermark Removal

TAB-DRW: A DFT-based Robust Watermark for Generative Tabular Data
