Research in deep learning is increasingly focused on the robustness and stability of neural networks, driven by real-world applications where inputs are often corrupted or degraded. One key direction is mitigating aliasing in convolutional layers, which can harm both numerical stability and statistical generalization; a common remedy is to low-pass filter feature maps before any strided downsampling (a minimal sketch appears below). Another is improving the corruption robustness of pre-trained CNNs, for example by aligning class activation maps (CAMs) between clean and corrupted versions of the same image so that the model attends to the same evidence under both (a sketch of such an alignment loss follows). A third thread develops topological generalization bounds, which relate the generalization error to notions of topological complexity, and recent work departs from existing strategies by grounding the proofs in algorithmic stability rather than mutual information (the classical stability bound below illustrates the flavor of such arguments).

Noteworthy papers in this area include:

- AR2, a simple yet effective method that enhances the corruption robustness of pre-trained CNNs by aligning class activation maps between clean and corrupted images.
- Mutual Information Free Topological Generalization Bounds via Stability, which introduces a learning-theoretic framework yielding comprehensive, interpretable topological generalization bounds free of intractable mutual-information terms.
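To make the anti-aliasing idea concrete, here is a minimal sketch of a blur-pool style layer, assuming PyTorch; the class name `BlurPool2d` and the 3x3 binomial kernel are illustrative choices, not taken from any particular paper above. The layer low-pass filters each channel with a fixed kernel before subsampling, so high-frequency content is attenuated instead of aliasing into lower frequencies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Fixed low-pass filter followed by subsampling (anti-aliased downsampling)."""

    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.channels = channels
        # 3x3 binomial kernel: outer product of [1, 2, 1] with itself, normalized.
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()
        # One copy of the kernel per channel, applied depthwise.
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reflect-pad so the filtering step preserves spatial size before striding.
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)
```

One common recipe is to split each strided operation into its dense version plus a blur-pool, e.g. replacing `nn.MaxPool2d(2)` with `nn.Sequential(nn.MaxPool2d(2, stride=1), BlurPool2d(channels))`.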
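The CAM-alignment idea can likewise be sketched as an auxiliary training loss. The following is a hedged illustration, not AR2's actual objective: the function names (`cam`, `ar2_style_loss`), the `backbone`/`head` split of the model, the MSE alignment term, and the weight `alpha` are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def cam(features: torch.Tensor, fc_weight: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Class activation map for each sample's ground-truth class.

    features:  (B, C, H, W) output of the last conv block (GAP-style CNN assumed).
    fc_weight: (num_classes, C) weight of the final linear classifier.
    labels:    (B,) ground-truth class indices.
    """
    w = fc_weight[labels]                           # (B, C) class-specific weights
    maps = torch.einsum("bc,bchw->bhw", w, features)
    # Normalize each map so the alignment loss compares shapes, not scales.
    maps = maps.flatten(1)
    return (maps - maps.mean(1, keepdim=True)) / (maps.std(1, keepdim=True) + 1e-6)

def ar2_style_loss(model, clean, corrupted, labels, alpha=1.0):
    """Cross-entropy on the corrupted batch plus a CAM-alignment penalty."""
    feat_corr = model.backbone(corrupted)           # (B, C, H, W)
    logits = model.head(F.adaptive_avg_pool2d(feat_corr, 1).flatten(1))
    ce = F.cross_entropy(logits, labels)
    with torch.no_grad():                           # clean CAM serves as a fixed target
        feat_clean = model.backbone(clean)
        cam_clean = cam(feat_clean, model.head.weight, labels)
    cam_corr = cam(feat_corr, model.head.weight, labels)
    align = F.mse_loss(cam_corr, cam_clean)         # pull corrupted CAM toward clean CAM
    return ce + alpha * align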
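The exact topological bounds of the stability-based paper are not reproduced here, but the algorithmic-stability toolkit such proofs build on has a classical form that shows why it sidesteps mutual information. For a learning algorithm $A$ that is uniformly $\beta$-stable with respect to a loss bounded by $M$, the Bousquet-Elisseeff bound states that, with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $n$,

```latex
R(A_S) \;\le\; \widehat{R}_S(A_S) \;+\; 2\beta \;+\; \bigl(4n\beta + M\bigr)\sqrt{\frac{\ln(1/\delta)}{2n}}.
```

The appeal of the stability route is that $\beta$ can often be bounded directly from properties of the training algorithm, so bounds of this kind avoid the intractable mutual-information estimates that earlier topological analyses required.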