Research on neural networks and deep learning is converging on a deeper understanding of adversarial attacks and model vulnerabilities. Recent work on transferable adversarial perturbations has shifted from attacking the output loss toward targeting specific neurons inside the network; disrupting these shared internal units provides a common basis for transferability across different models. There is also growing interest in the fundamental mechanisms behind adversarial examples, with some work suggesting that superposition may be a major contributing factor. The surjectivity of neural networks is being examined as well, with implications for model safety and jailbreak vulnerabilities, and the feasibility of gradient inversion attacks in federated learning is being investigated, focusing on how architectural choices and operational modes affect privacy risk.

Noteworthy papers include:

- NAT, which targets specific neurons to enhance adversarial transferability and achieves high fooling rates in both cross-model and cross-domain settings.
- The paper on surjectivity of neural networks, which proves that many modern architectures are almost always surjective, so any specified output can be produced by some input; this raises concerns about model safety and jailbreaks.
- The paper on gradient inversion attacks in federated learning, which systematically analyzes how architecture and training behavior affect vulnerability, introduces novel attacks, and provides actionable insight into when models are likely to be vulnerable.
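To make the neuron-targeting idea concrete, the sketch below perturbs an input so that the activation of one chosen intermediate channel of a surrogate model is suppressed, rather than maximizing the classification loss; the intuition is that disrupting shared internal units transfers across models better than attacking a single model's decision boundary. The surrogate (a torchvision ResNet-18), the targeted layer and channel, and the L-infinity budget are assumptions for illustration, not details of the NAT method.

```python
import torch
from torchvision import models

# Hypothetical illustration of a neuron-targeted transfer attack: suppress the
# activation of one intermediate channel instead of maximizing the classification
# loss. The layer, channel index, and L-inf budget are assumptions, not values
# from the NAT paper.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

captured = {}
def hook(_module, _inp, out):
    captured["act"] = out

# Target an intermediate block; which layer/channel to attack is an assumption here.
model.layer2.register_forward_hook(hook)
target_channel = 7

def neuron_attack(x, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style attack that minimizes the mean activation of one channel."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)                                  # forward pass fills `captured`
        act = captured["act"][:, target_channel]      # activation of the targeted channel
        loss = act.mean()                             # drive this activation toward zero
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                  # descend on the neuron's activation
            x_adv = x + (x_adv - x).clamp(-eps, eps)             # project into the L-inf ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed input image
x_adv = neuron_attack(x)
```

Because the objective depends only on an intermediate activation rather than the surrogate's logits, the resulting perturbation is not tied to that model's class boundaries, which is the property the neuron-targeting line of work exploits for transferability.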
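The safety concern behind the surjectivity result is that, if every output is reachable, an attacker can in principle search for an input that realizes an arbitrary target output. The toy sketch below illustrates such a preimage search numerically via gradient descent on a small MLP; it does not reproduce the paper's proof, and the architecture, target vector, and optimizer settings are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy preimage search: given an arbitrary target output, optimize an input until
# the network maps it (approximately) onto that target. The small MLP, target,
# and optimizer settings are placeholders, not taken from the surjectivity paper.

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4)).eval()

target = torch.tensor([[3.0, -1.0, 0.5, 2.0]])   # output we want the network to produce
x = torch.zeros(1, 16, requires_grad=True)       # candidate preimage, optimized below
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    loss = F.mse_loss(net(x), target)
    loss.backward()
    opt.step()

print(f"final output error: {loss.item():.4e}")
```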
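Gradient inversion attacks in federated learning attempt to reconstruct a client's training example from the gradient it shares with the server. The sketch below shows the standard gradient-matching formulation (in the spirit of DLG-style attacks rather than the specific attacks introduced in the paper); the tiny linear model, single-sample batch, and LBFGS settings are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal gradient-matching sketch: the "server" observes the gradient a client
# computed on one (image, label) pair and optimizes a dummy pair until its
# gradient matches the observed one. The model size, single-sample batch, and
# optimizer settings are assumptions for illustration.

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
params = list(model.parameters())

# Client side: the gradient that would be shared in federated learning.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(true_loss, params)]

# Attacker side: jointly optimize a dummy image and a soft label so that their
# gradient matches the shared one.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy], lr=1.0)

def closure():
    opt.zero_grad()
    dummy_loss = F.cross_entropy(model(x_dummy), F.softmax(y_dummy, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, params, create_graph=True)
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    return match

for _ in range(50):
    match_loss = opt.step(closure)

print(f"gradient-matching loss: {match_loss.item():.4e}")
print(f"reconstruction MSE vs. true image: {F.mse_loss(x_dummy.detach(), x_true).item():.4e}")
```

How well this reconstruction works depends heavily on the model architecture, batch size, and training mode, which is exactly the kind of dependence the systematic analysis in the federated-learning paper investigates.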