Advances in Privacy and Security for Deep Learning

The field of deep learning is placing greater emphasis on privacy and security, with a focus on developing defenses against attacks that can compromise sensitive information, including model inversion, data reconstruction, and feature inversion attacks. There is also growing interest in building more robust and secure deep learning systems, such as those using robust watermarking and deep hashing methods. Notably, diffusion-based image generation and editing techniques have posed new challenges for robust image watermarking, and researchers are working to address them.

Noteworthy papers include:

On the Information-Theoretic Fragility of Robust Watermarking under Diffusion Editing, which investigates the fragility of robust watermarking schemes under diffusion-based image editing.

Model Inversion Attack Against Deep Hashing, which proposes a diffusion-based model inversion framework designed for deep hashing.

InfoDecom, which proposes a defense framework that decomposes and removes redundant information to defend against privacy leakage in split inference.

What Your Features Reveal, which introduces a black-box feature inversion attack framework that achieves high-fidelity image reconstruction from the intermediate features of split DNNs.
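To make the split inference setting concrete, the following is a minimal, hypothetical sketch (the layer shapes and function names are illustrative, not taken from any of the papers above): the client runs the first layers of a model and transmits the intermediate features to a server, which completes the forward pass. Feature inversion attacks reconstruct the client's input from exactly these transmitted features, which is the leakage that defenses like InfoDecom aim to reduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side "head": one linear layer + ReLU.
# 16-dimensional input -> 8-dimensional intermediate features.
W_head = rng.standard_normal((16, 8))

def client_head(x):
    # These features are what gets sent over the wire,
    # and what a feature inversion attacker observes.
    return np.maximum(x @ W_head, 0.0)

# Server-side "tail": linear classifier over the features.
W_tail = rng.standard_normal((8, 3))

def server_tail(features):
    return features @ W_tail  # class logits

x = rng.standard_normal(16)      # private client input
features = client_head(x)        # transmitted to the server
logits = server_tail(features)   # inference completes server-side
print(features.shape, logits.shape)
```

An attacker in this setting never sees `x` or `W_head`; a black-box feature inversion attack instead trains a decoder that maps observed `features` back to an estimate of `x`, using query access to the head.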

Sources

On the Information-Theoretic Fragility of Robust Watermarking under Diffusion Editing

Codebook-Centric Deep Hashing: End-to-End Joint Learning of Semantic Hash Centers and Neural Hash Function

Model Inversion Attack Against Deep Hashing

InfoDecom: Decomposing Information for Defending against Privacy Leakage in Split Inference

What Your Features Reveal: Data-Efficient Black-Box Feature Inversion Attack for Split DNNs
