Advancements in Adversarial Robustness and Image Protection

Research on making deep neural networks robust to adversarial attacks continues to advance on several fronts. For image classification, recent work pairs adversarial training with contrastive learning, transformer-based denoising over spatial and frequency domain representations, and supervised contrastive learning with hard positive mining, each aiming to learn representations that remain stable under adversarial perturbations. In parallel, there is growing interest in protecting images against AI-powered editing, using techniques such as implanting irreversible adversarial noise and applying adaptive per-region Gaussian blur. A minimal code sketch of the adversarial-training mechanism underlying several of these defenses follows below.

Noteworthy papers in this area include C-LEAD, which uses contrastive learning for adversarial defense, and BlurGuard, which proposes a simple approach for robustifying image protection against AI-powered editing. Parameter Interpolation Adversarial Training and Mutual Information guided Visual Contrastive Learning likewise report gains in model robustness and representation learning. Together, these directions aim to improve the security and reliability of deep learning systems in computer vision applications.
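To make the underlying mechanism concrete, the sketch below shows FGSM-style adversarial example generation and a single adversarial-training step in PyTorch. This is a generic, textbook-level illustration of the attack/defense loop that the listed papers build on, not the implementation of any cited method; the placeholder model, the `epsilon` budget, and the function names (`fgsm_perturb`, `adversarial_training_step`) are illustrative assumptions.

```python
# Minimal sketch: FGSM adversarial examples and one adversarial-training step.
# Placeholder model, data, and epsilon; not the method of any paper listed below.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + epsilon * grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

def adversarial_training_step(model, optimizer, images, labels, epsilon=8 / 255):
    """Train on adversarially perturbed inputs instead of clean ones."""
    model.eval()                       # craft the attack without updating BN statistics
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Smoke test with a tiny placeholder CNN and random data.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    print("adversarial loss:", adversarial_training_step(model, optimizer, x, y))
```

The defenses surveyed above differ mainly in what they add around this loop, for example contrastive or mutual-information objectives on the perturbed representations, or denoising modules applied before classification.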

Sources

C-LEAD: Contrastive Learning for Enhanced Adversarial Defense

Trans-defense: Transformer-based Denoiser for Adversarial Defense with Spatial-Frequency Domain Representation

ANCHOR: Integrating Adversarial Training with Hard-mined Supervised Contrastive Learning for Robust Representation Learning

Mutual Information guided Visual Contrastive Learning

Enhancing rice leaf images: An overview of image denoising techniques

BlurGuard: A Simple Approach for Robustifying Image Protection Against AI-Powered Editing

Parameter Interpolation Adversarial Training for Robust Image Classification

T-MLA: A Targeted Multiscale Log-Exponential Attack Framework for Neural Image Compression

MiniFool - Physics-Constraint-Aware Minimizer-Based Adversarial Attacks in Deep Neural Networks

Protecting the Neural Networks against FGSM Attack Using Machine Unlearning

Probabilistic Robustness for Free? Revisiting Training via a Benchmark

Improving Unlearning with Model Updates Probably Aligned with Gradients
