Research in machine learning security and generative models continues to advance, with recent work aimed at probing and improving the robustness and reliability of models. Two threads stand out: evolutionary algorithms used both to craft adversarial attacks and to train generative adversarial networks, and hardware-level attacks on the systems that run these models. Notable developments include techniques to reverse-engineer memory mappings and amplify hammering intensity in Rowhammer attacks, evolutionary algorithms for the discrete optimization problems that arise when attacking graph neural networks, on-manifold perturbations for realistic adversarial attacks on tabular data, and co-evolutionary approaches to training generative adversarial networks.

Several papers are particularly noteworthy. GPUHammer demonstrates the first successful Rowhammer attack on a discrete GPU. EvA reports substantially stronger attacks on graph neural networks by treating the attack as a discrete optimization problem solved with an evolutionary algorithm. Crafting Imperceptible On-Manifold Adversarial Attacks for Tabular Data introduces a latent-space perturbation framework built on a mixed-input Variational Autoencoder to generate imperceptible adversarial examples.
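The on-manifold idea behind the tabular-data work can be illustrated with a small sketch: rather than perturbing raw features directly, the attacker perturbs the latent code of a trained autoencoder and decodes the result, so the adversarial example stays close to the learned data manifold. The code below is a minimal, hypothetical illustration in PyTorch; the `TinyVAE`, the linear classifier, the L2 latent budget, and all dimensions are assumptions for demonstration only, not the mixed-input VAE or the attack procedure from the paper.

```python
# Hypothetical sketch of an on-manifold (latent-space) adversarial attack.
# All models and sizes are illustrative placeholders: the point is only that
# the perturbation is applied to the latent code z, and the decoded sample
# therefore stays on the data manifold captured by the autoencoder.

import torch
import torch.nn as nn


class TinyVAE(nn.Module):
    """Minimal stand-in for a tabular VAE (assumed encode/decode interface)."""

    def __init__(self, n_features=10, latent_dim=4):
        super().__init__()
        self.enc = nn.Linear(n_features, latent_dim)
        self.dec = nn.Linear(latent_dim, n_features)

    def encode(self, x):
        return self.enc(x)

    def decode(self, z):
        return self.dec(z)


def latent_space_attack(vae, clf, x, y_true, steps=50, lr=0.05, budget=0.5):
    """Perturb the latent code of x so the decoded sample is misclassified,
    while keeping the latent perturbation inside an L2 ball (a crude
    imperceptibility proxy)."""
    with torch.no_grad():
        z0 = vae.encode(x)
    delta = torch.zeros_like(z0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = vae.decode(z0 + delta)      # decoded sample stays on the manifold
        logits = clf(x_adv)
        # Untargeted attack: maximize the loss on the true label.
        loss = -nn.functional.cross_entropy(logits, y_true)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project the latent perturbation back into the L2 budget.
        with torch.no_grad():
            norm = delta.norm(dim=-1, keepdim=True).clamp(min=1e-12)
            delta.mul_((budget / norm).clamp(max=1.0))
    return vae.decode(z0 + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    vae = TinyVAE()
    clf = nn.Linear(10, 2)                  # placeholder classifier
    x = torch.randn(1, 10)                  # one row of continuous features
    y_true = torch.tensor([1])
    x_adv = latent_space_attack(vae, clf, x, y_true)
    print(clf(x).argmax(1).item(), "->", clf(x_adv).argmax(1).item())
```

In a realistic setting, a mixed-input VAE would handle categorical and continuous columns separately, and imperceptibility would be assessed in the original feature space rather than by a simple latent-norm budget; the sketch only conveys the on-manifold perturbation idea.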