The field of deep learning is moving toward two complementary goals: adversarial robustness and efficient model updates. On the robustness side, recent work designs models that can withstand adversarial attacks, i.e., small perturbations of the input data that cause a model to produce incorrect predictions (a minimal sketch of such a perturbation appears below). Proposed methods span both defense and offense, from risk-calibrated approaches to streaming intrusion detection to novel Wasserstein distributional attacks that probe model weaknesses.

In parallel, there is growing interest in efficient model updates, such as transferring knowledge across pre-trained models via task vectors and adapting models with only a handful of labeled samples. Noteworthy papers include 'Risk-Calibrated Bayesian Streaming Intrusion Detection with SRE-Aligned Decisions', which reports improved precision-recall trade-offs, and 'Gradient-Sign Masking for Task Vector Transport Across Pre-Trained Models', which reports significant performance gains on vision and language benchmarks.
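To make the notion of a small adversarial perturbation concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard baseline attack rather than the Wasserstein distributional method named above. The `model`, `x`, `y`, and `epsilon` names are placeholders, not from any of the cited papers.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft a small adversarial perturbation via FGSM.
    model: any differentiable classifier; x: input batch in [0, 1];
    y: true labels; epsilon: perturbation budget (all placeholders)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation this small is often imperceptible to a human yet enough to flip the model's prediction, which is what robustness-oriented defenses aim to prevent.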
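The task-vector idea behind transferring knowledge across pre-trained models can be summarized as weight-space arithmetic: a task vector is the difference between fine-tuned and base weights, which can then be added to another model. The sketch below illustrates this, with a sign-agreement mask that is only an assumption inspired by the title of 'Gradient-Sign Masking for Task Vector Transport Across Pre-Trained Models'; the function name, the `reference_grads` input, and the masking rule are all hypothetical, not the paper's published algorithm.

```python
import torch

def masked_task_vector(base_state, finetuned_state, reference_grads):
    """Hypothetical sketch: build a task vector (fine-tuned minus base
    weights) and keep only entries whose sign agrees with a reference
    gradient direction. The sign-agreement mask is an assumption, not
    the cited paper's method."""
    transported = {}
    for name, base_w in base_state.items():
        delta = finetuned_state[name] - base_w  # task vector entry
        mask = delta.sign() == reference_grads[name].sign()
        transported[name] = delta * mask        # masked task vector
    return transported

# Applying the vector to a different pre-trained model, scaled by alpha:
#   target_state[name] += alpha * transported[name]
```

Because only a weight delta is stored and transported, this style of update avoids retraining the target model from scratch, which is what makes it attractive for efficient model updates.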