Advances in Adversarial Robustness and Defense

The field of machine learning is moving towards more robust and secure models, with a focus on adversarial robustness and defense. Recent papers introduce new defenses against adversarial attacks, such as DRIFT, which uses a stochastic ensemble of lightweight filters to disrupt gradient consensus, and MANI-Pure, which uses magnitude-adaptive noise injection to suppress adversarial perturbations. Other work explores diffusion models for secure and reversible face anonymization, as well as new attack methods such as DIA, which targets the integrated DDIM trajectory path. Noteworthy papers include DRIFT, which achieves substantial robustness gains on ImageNet, and VAGUEGAN, which introduces a stealthy poisoning and backdoor attack on image generative pipelines.
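The idea of disrupting gradient consensus with a stochastic ensemble of lightweight filters can be illustrated with a minimal sketch. This is not the DRIFT authors' implementation; the filter choices (random-strength Gaussian blur plus small additive noise), the parameter ranges, and the function names are all assumptions for illustration. The key point is that each forward pass samples fresh filter parameters, so gradients estimated across queries do not agree on a single transformation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_blur_2d(x, sigma):
    # Separable blur: convolve a small 1-D Gaussian kernel along each axis.
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    for axis in (0, 1):
        x = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, x)
    return x

def stochastic_filter_ensemble(image, n_filters=4):
    """Average the image over randomly parameterized lightweight filters.

    Because filter parameters are resampled on every call, an attacker
    probing the defended model sees a different transformation each query,
    which breaks the gradient consensus that white-box attacks rely on.
    """
    outputs = []
    for _ in range(n_filters):
        sigma = rng.uniform(0.5, 1.5)             # random blur strength
        noise = rng.normal(0, 0.02, image.shape)  # small additive noise
        outputs.append(gaussian_blur_2d(image + noise, sigma))
    return np.mean(outputs, axis=0)

img = rng.random((8, 8))
defended = stochastic_filter_ensemble(img)
print(defended.shape)  # (8, 8)
```

In a real defense the filtered output would be fed to the classifier, and the filter family would be chosen so that clean accuracy is preserved while adversarial perturbations are averaged out.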

Sources

Scalable Second-order Riemannian Optimization for $K$-means Clustering

Decoding Deception: Understanding Automatic Speech Recognition Vulnerabilities in Evasion and Poisoning Attacks

Pushing Toward the Simplex Vertices: A Simple Remedy for Code Collapse in Smoothed Vector Quantization

DRIFT: Divergent Response in Filtered Transformations for Robust Adversarial Defense

VAGUEGAN: Stealthy Poisoning and Backdoor Attacks on Image Generative Pipelines

Score-based Membership Inference on Diffusion Models

MANI-Pure: Magnitude-Adaptive Noise Injection for Adversarial Purification

Field Calibration of Hyperspectral Cameras for Terrain Inference

OmniDFA: A Unified Framework for Open Set Synthesis Image Detection and Few-Shot Attribution

PUREVQ-GAN: Defending Data Poisoning Attacks through Vector-Quantized Bottlenecks

Reconcile Certified Robustness and Accuracy for DNN-based Smoothed Majority Vote Classifier

Indirect Attention: Turning Context Misalignment into a Feature

Nearest matrix with multiple eigenvalues by Riemannian optimization

DiVeQ: Differentiable Vector Quantization Using the Reparameterization Trick

Stealing AI Model Weights Through Covert Communication Channels

CODED-SMOOTHING: Coding Theory Helps Generalization

Understanding Sensitivity of Differential Attention through the Lens of Adversarial Robustness

DIA: The Adversarial Exposure of Deterministic Inversion in Diffusion Models

Towards Adversarial Training under Hyperspectral Images

Secure and reversible face anonymization with diffusion models

HVAC-EAR: Eavesdropping Human Speech Using HVAC Systems

EditTrack: Detecting and Attributing AI-assisted Image Editing

Purrception: Variational Flow Matching for Vector-Quantized Image Generation

Towards Imperceptible Adversarial Defense: A Gradient-Driven Shield against Facial Manipulations

Mirage Fools the Ear, Mute Hides the Truth: Precise Targeted Adversarial Attacks on Polyphonic Sound Event Detection Systems
