The field of generative models and image watermarking is evolving rapidly, with a focus on improving the security and authenticity of AI-generated content. Researchers are exploring new watermarking methods, including techniques built on diffusion models and autoregressive models, that aim to produce robust watermarks able to withstand attacks such as Deepfake manipulations. There is also growing interest in frameworks for analyzing the fingerprints of generative models, which can help identify the source of generated content, as well as in improving the performance of image generation models through new architectures and training methods. Noteworthy papers in this area include:
- The paper proposing a unified framework for stealthy adversarial generation via latent optimization and transferability enhancement, which won first place in a competition.
- The paper introducing PECCAVI, a visual-paraphrase-attack-safe and distortion-free image watermarking technique.
- The paper proposing a Tamper-Aware Generative image WaterMarking method, which achieves state-of-the-art tampering robustness and localization capability.