Adversarial Attacks and Defenses in Deep Learning

Research on adversarial attacks and defenses in deep learning is advancing on both sides of the arms race, with new threat models, attack strategies, and defense mechanisms appearing in quick succession. On the attack side, work is converging on more sophisticated methods, such as data reconstruction attacks, backdoor attacks, and adversarial patch attacks, which can compromise model integrity at training or inference time. On the defense side, researchers are proposing countermeasures including diffusion denoised smoothing, adversarial training, and purification of adversarial patches. Noteworthy papers include BadSR, which improves the stealthiness of poisoned high-resolution (HR) images in backdoor attacks on image super-resolution, and SuperPure, which proposes a pixel-wise masking scheme to purify images of localized and distributed adversarial patches. Overall, the field is moving toward more robust and secure deep learning systems, and continued research is needed to keep pace with emerging threats.
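To make the adversarial-training defense named above concrete, here is a minimal PyTorch sketch of PGD-based adversarial training in the style of Madry et al.: craft a worst-case perturbation within an L-infinity ball, then train on the perturbed inputs. The toy model, epsilon of 8/255, step size, and step count are illustrative assumptions and are not drawn from any of the papers listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity-bounded adversarial examples via projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step on adversarial examples instead of clean inputs."""
    model.eval()                     # freeze BN/dropout stats while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy demo on random data; a real setup would iterate over a dataset loader.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    print("adversarial loss:", adversarial_training_step(model, optimizer, x, y))
```

The eval()/train() toggle around the inner attack is a common idiom: it keeps the attack iterations from polluting batch-norm statistics, while the outer step still updates them on the adversarial batch.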

Sources

Vulnerability of Transfer-Learned Neural Networks to Data Reconstruction Attacks in Small-Data Regime

BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution

My Face Is Mine, Not Yours: Facial Protection Against Diffusion Model Face Swapping

Beyond Classification: Evaluating Diffusion Denoised Smoothing for Security-Utility Trade off

Challenger: Affordable Adversarial Driving Video Generation

BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World

TRAIL: Transferable Robust Adversarial Images via Latent diffusion

Accelerating Targeted Hard-Label Adversarial Attacks in Low-Query Black-Box Settings

SuperPure: Efficient Purification of Localized and Distributed Adversarial Patches via Super-Resolution GAN Models

AdvReal: Adversarial Patch Generation Framework with Application to Adversarial Safety Evaluation of Object Detection Systems
