Deepfake Detection and Image Manipulation

The field of deepfake detection and image manipulation is advancing rapidly, with a focus on developing more effective and robust methods for detecting and attributing manipulated media. Recent research has highlighted the limitations of existing detection methods, which often struggle to generalize across different generative domains and manipulation techniques. In response, researchers are exploring approaches that integrate multiple modalities, such as visual, textual, and frequency-domain features, to improve detection accuracy and robustness. There is also growing interest in attribution models that identify the specific manipulation method used, which could improve trustworthiness and explainability for end users. Large-scale datasets, including ones focused on remote sensing imagery, are being developed to support the evaluation of next-generation forgery detection approaches.

Noteworthy papers include:

- CAMME, which proposes a framework that dynamically integrates visual, textual, and frequency-domain features through a multi-head cross-attention mechanism.
- RSFAKE-1M, which introduces a large-scale dataset for detecting diffusion-generated remote sensing forgeries.
- Weakly-supervised Localization of Manipulated Image Regions, which integrates activation maps with segmentation maps for coarse localization of manipulated regions.
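The multi-modal fusion idea can be sketched with a minimal cross-attention layer: tokens from one modality act as queries and attend over tokens from another modality (here, a toy frequency-domain view obtained via FFT). This is an illustrative NumPy sketch, not CAMME's actual architecture; the random projection matrices stand in for learned parameters, and the head count and dimensions are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(query_feats, context_feats, num_heads=4, seed=0):
    """Fuse one modality's tokens (queries) with another's (keys/values).

    query_feats, context_feats: arrays of shape (num_tokens, dim).
    Returns fused features with the same shape as query_feats.
    """
    d = query_feats.shape[-1]
    assert d % num_heads == 0
    dh = d // num_heads
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned weight matrices.
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    Q, K, V = query_feats @ Wq, context_feats @ Wk, context_feats @ Wv

    def split(x):  # (n, d) -> (heads, n, dh)
        return x.reshape(x.shape[0], num_heads, dh).transpose(1, 0, 2)

    Qh, Kh, Vh = split(Q), split(K), split(V)
    # Scaled dot-product attention per head.
    attn = softmax(Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)
    out = (attn @ Vh).transpose(1, 0, 2).reshape(-1, d)
    return out @ Wo

# Example: fuse 16 visual tokens (dim 64) with a frequency-domain view of them.
visual = np.random.default_rng(1).standard_normal((16, 64))
freq = np.abs(np.fft.fft(visual, axis=0))  # toy frequency-domain features
fused = multi_head_cross_attention(visual, freq)
print(fused.shape)  # (16, 64)
```

A real detector would learn the projection weights end to end and typically append a classification head over the fused tokens; the point here is only the fusion mechanism.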

Sources

CAMME: Adaptive Deepfake Image Detection with Multi-Modal Cross-Attention

Do DeepFake Attribution Models Generalize?

RSFAKE-1M: A Large-Scale Dataset for Detecting Diffusion-Generated Remote Sensing Forgeries

Weakly-supervised Localization of Manipulated Image Regions Using Multi-resolution Learned Features
