Advances in AI Safety and Security

Recent AI research increasingly targets critical challenges in safety and security, with a focus on improving the robustness of large language models, detecting data contamination, and strengthening privacy protections. Proposed methods include fine-grained iterative adversarial attacks, semantically aware privacy agents for conversational services, adaptive defenses against harmful fine-tuning, and test-time debiasing for vision-language models. Noteworthy papers in this area introduce new frameworks for detecting data contamination, schedule training data adaptively to defend against harmful fine-tuning, and mitigate bias in vision-language models at test time. Together, these advances support the development of more secure and trustworthy AI systems.
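As a minimal illustrative sketch of the contamination-detection theme (not the method of any paper listed below), one common heuristic compares a model's loss on a verbatim benchmark item against its loss on a semantic paraphrase: a large gap suggests the verbatim text may have been memorized during training. The model name, threshold, and example strings here are placeholders.

```python
# Illustrative perplexity-gap heuristic for data-contamination screening.
# Assumption: not drawn from any listed paper; model name and texts are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; substitute the model under audit
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mean_token_loss(text: str) -> float:
    """Average cross-entropy the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

def contamination_score(original: str, paraphrase: str) -> float:
    """Positive values mean the verbatim text is 'easier' than its paraphrase."""
    return mean_token_loss(paraphrase) - mean_token_loss(original)

if __name__ == "__main__":
    item = "The quick brown fox jumps over the lazy dog."
    rewrite = "A fast brown fox leaps above the idle dog."
    gap = contamination_score(item, rewrite)
    # A large positive gap is only a weak signal; practical detectors aggregate
    # over many items and calibrate against known-uncontaminated baselines.
    print(f"loss gap (paraphrase - original): {gap:.3f}")
```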

Sources

Fine-Grained Iterative Adversarial Attacks with Limited Computation Budget

Semantically-Aware LLM Agent to Enhance Privacy in Conversational AI Services

Detecting Data Contamination in LLMs via In-Context Learning

Contrastive Knowledge Transfer and Robust Optimization for Secure Alignment of Large Language Models

Adaptive Defense against Harmful Fine-Tuning for Large Language Models via Bayesian Data Scheduler

Un-Attributability: Computing Novelty From Retrieval & Semantic Similarity

Visual Backdoor Attacks on MLLM Embodied Decision Making via Contrastive Trigger Learning

Best Practices for Biorisk Evaluations on Open-Weight Bio-Foundation Models

Probing Knowledge Holes in Unlearned LLMs

EL-MIA: Quantifying Membership Inference Risks of Sensitive Entities in LLMs

SegDebias: Test-Time Bias Mitigation for ViT-Based CLIP via Segmentation

Black-Box Membership Inference Attack for LVLMs via Prior Knowledge-Calibrated Memory Probing

Optimizing AI Agent Attacks With Synthetic Data

Contamination Detection for VLMs using Multi-Modal Semantic Perturbation

PrivacyCD: Hierarchical Unlearning for Protecting Student Privacy in Cognitive Diagnosis

PETRA: Pretrained Evolutionary Transformer for SARS-CoV-2 Mutation Prediction

REMIND: Input Loss Landscapes Reveal Residual Memorization in Post-Unlearning LLMs

Reusing Pre-Training Data at Test Time is a Compute Multiplier

Benchmark Designers Should "Train on the Test Set" to Expose Exploitable Non-Visual Shortcuts

Forgetting is Everywhere
