The field of adversarial attacks on face recognition is evolving rapidly, with two main thrusts: improving the transferability of adversarial examples and making face recognition systems more explainable. On the attack side, researchers are boosting black-box transferability through ensemble attacks, which craft perturbations against several surrogate models at once, and attention mechanisms that focus perturbations on identity-relevant facial regions. In parallel, there is growing interest in explainable face recognition systems that provide visual explanations for their decisions, increasing trust and transparency.
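To make the ensemble idea concrete, here is a minimal sketch of an FGSM-style ensemble attack: gradients of a loss are computed against several surrogate models and averaged before taking one signed step. The toy linear models, the margin-style loss, and the function name `ensemble_fgsm` are illustrative assumptions, not the method of any paper cited below.

```python
import numpy as np

def ensemble_fgsm(x, weights, y, eps=0.1):
    """Sketch of an ensemble FGSM-style attack on toy linear surrogates.

    Each surrogate scores the input as w.x; with a margin-style loss
    L = -y * (w . x), the input gradient is simply -y * w. Averaging
    gradients across surrogates approximates attacking the ensemble,
    which tends to transfer better than attacking a single model.
    """
    grads = [-y * w for w in weights]   # per-surrogate input gradients
    avg_grad = np.mean(grads, axis=0)   # aggregate across the ensemble
    return x + eps * np.sign(avg_grad)  # one FGSM-style signed step

# Illustrative data: one input, two surrogate linear models.
x = np.array([1.0, -2.0, 0.5])
weights = [np.array([0.2, -0.1, 0.4]),
           np.array([0.3, 0.1, -0.2])]
x_adv = ensemble_fgsm(x, weights, y=1.0, eps=0.1)
```

Real attacks replace the toy linear surrogates with deep face recognition networks and iterate the signed step under an L-infinity budget, but the aggregation structure is the same.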
Notable papers in this area include:
- Harmonizing Intra-coherence and Inter-divergence in Ensemble Attacks for Adversarial Transferability, which introduces a novel ensemble attack method that significantly improves the transferability of adversarial examples.
- Attention-aggregated Attack for Boosting the Transferability of Facial Adversarial Examples, which proposes a new attack that enhances the transferability of adversarial examples against face recognition models.
- Explainable Face Recognition via Improved Localization, which presents an explainable face recognition method that provides visual explanations for its decisions.
- Reliable Disentanglement Multi-view Learning Against View Adversarial Attacks, which proposes a novel multi-view learning framework that can defend against adversarial attacks.
- MTL-UE: Learning to Learn Nothing for Multi-Task Learning, which presents a unified framework for generating unlearnable examples for multi-task data and models.