The field of computer vision is seeing significant progress in hardening deep neural networks against adversarial attacks. Recent work on robust image classification draws on contrastive learning, including supervised variants with hard positive mining, as well as transformer-based denoising, and these approaches have shown promising results against a range of adversarial perturbations.

A related line of work protects images against AI-powered editing rather than misclassification, exploring techniques such as implanting irreversible adversarial noise and applying adaptive per-region Gaussian blur.

Noteworthy papers in this area include C-LEAD, which uses contrastive learning for adversarial defense, and BlurGuard, which proposes a simple approach to robustifying image protection against AI-powered editing. In addition, Parameter Interpolation Adversarial Training and Mutual Information guided Visual Contrastive Learning report strong results on model robustness and representation learning. Together, these advances stand to improve the security and reliability of deep learning systems in computer vision applications.
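To make the threat model concrete: the adversarial perturbations these defenses target are typically small, gradient-guided input changes. The papers above defend full neural networks, but the idea can be illustrated with the classic Fast Gradient Sign Method (FGSM, which is a standard baseline attack, not the specific attack used in any of these works) on a simple logistic model, where the input gradient has a closed form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, b, x, y):
    # Binary cross-entropy of a logistic model on one example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm_perturb(w, b, x, y, eps):
    # FGSM: move the input one bounded step in the sign of the loss
    # gradient. For logistic regression the gradient of the loss
    # w.r.t. the input is (p - y) * w in closed form.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy model weights (illustrative only)
b = 0.1
x = rng.normal(size=8)   # a "clean" input
y = 1.0

# The perturbation is bounded by eps per coordinate yet raises the loss.
x_adv = fgsm_perturb(w, b, x, y, eps=0.1)
```

The defenses surveyed above aim to keep a model's predictions stable under exactly this kind of bounded, loss-increasing perturbation.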
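The adaptive per-region Gaussian blur mentioned for image protection can also be sketched generically. BlurGuard's actual adaptation scheme is not detailed here; the sketch below simply shows the mechanical core under assumed inputs: a region mask labeling pixels and a per-region blur strength, with the region names and API entirely hypothetical:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Normalized 1-D Gaussian kernel of width 2*radius + 1.
    t = np.arange(-radius, radius + 1)
    k = np.exp(-(t ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # Separable Gaussian blur of a 2-D grayscale array:
    # pad, then convolve rows and columns with the 1-D kernel.
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def per_region_blur(img, region_mask, sigmas):
    # region_mask: integer array (same shape as img) giving a region id
    # per pixel; sigmas: dict mapping region id -> blur strength.
    # Each region is filled from a copy blurred at its own strength.
    out = np.zeros_like(img, dtype=float)
    for rid, sigma in sigmas.items():
        blurred = gaussian_blur(img, sigma) if sigma > 0 else img
        sel = region_mask == rid
        out[sel] = blurred[sel]
    return out

# Toy usage: leave the left half untouched, blur the right half.
img = np.random.default_rng(1).normal(size=(32, 32))
mask = np.zeros((32, 32), dtype=int)
mask[:, 16:] = 1
protected = per_region_blur(img, mask, {0: 0.0, 1: 2.0})
```

The per-region design lets sensitive areas (faces, text) receive stronger smoothing than the background, which is the general intuition behind adaptive blurring for edit protection.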