The field of audio and image forgery detection is rapidly evolving, with a focus on developing more robust and generalizable methods. Recent research has highlighted the importance of considering frequency bias and spectral contrast in detecting deepfakes, as well as the need for more effective watermarking techniques to prevent the misuse of AI-generated audio.
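The "frequency perspective" mentioned above typically refers to inspecting the spectral content of an image or audio clip, since generative models often leave characteristic high-frequency artifacts. The sketch below is a minimal, generic illustration of that idea (not any cited paper's method); the function names and the energy-ratio cue are illustrative assumptions.

```python
# Minimal sketch of frequency-domain inspection for forgery cues.
# Not the method of any paper cited above; function names are illustrative.
import numpy as np

def log_magnitude_spectrum(image: np.ndarray) -> np.ndarray:
    """Centered log-magnitude spectrum of a grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # 2D FFT, DC component at center
    return np.log1p(np.abs(spectrum))               # compress dynamic range for viewing

def high_frequency_energy_ratio(image: np.ndarray, radius_frac: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc (a crude artifact cue)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = power.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_band = dist <= radius_frac * min(h, w) / 2
    return float(power[~low_band].sum() / power.sum())
```

In practice, frequency-guided detectors learn features from such spectra rather than thresholding a single ratio; the point of the sketch is only to show where the spectral signal comes from.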
Noteworthy papers in this area include DHAuDS, which introduces a dynamic and heterogeneous audio benchmark for test-time adaptation, and SONAR, which proposes a frequency-guided framework for generalizable deepfake detection. Frequency Bias Matters provides a fundamental explanation of generalization and robustness issues in deep image forgery detection from a frequency perspective. HarmonicAttack presents an adaptive cross-domain audio watermark removal method, and TAB-DRW introduces a DFT-based robust watermark for generative tabular data. Together, these papers mark significant strides toward more reliable and more broadly applicable forgery detection and watermarking methods.
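To make the DFT-based watermarking idea concrete, the following is a minimal, non-blind sketch on a 1D signal: watermark bits scale selected DFT magnitudes up or down, and detection compares against the original spectrum. This is an assumed toy scheme for illustration, not TAB-DRW's actual construction, and all names and parameters are hypothetical.

```python
# Toy DFT-domain watermark (illustrative only; not TAB-DRW's scheme).
import numpy as np

def embed_dft_watermark(signal: np.ndarray, bits: np.ndarray,
                        strength: float = 0.05, start_bin: int = 8) -> np.ndarray:
    """Scale a band of DFT coefficients up/down according to watermark bits."""
    spec = np.fft.rfft(signal)
    for i, b in enumerate(bits):
        spec[start_bin + i] *= (1 + strength) if b else (1 - strength)
    return np.fft.irfft(spec, n=len(signal))

def detect_dft_watermark(watermarked: np.ndarray, original: np.ndarray,
                         n_bits: int, start_bin: int = 8) -> np.ndarray:
    """Recover bits by comparing DFT magnitudes against the original signal."""
    wm_mag = np.abs(np.fft.rfft(watermarked))
    ref_mag = np.abs(np.fft.rfft(original))
    band = slice(start_bin, start_bin + n_bits)
    return (wm_mag[band] > ref_mag[band]).astype(int)
```

Robust schemes differ mainly in how the embedding band is chosen and how detection is made blind and resistant to removal attacks of the kind HarmonicAttack studies.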