Advances in Neural Network Robustness and Generalization
The field of neural networks is moving toward stronger robustness and generalization, with a focus on methods that improve model reliability and accuracy. Recent studies have examined the benefits of sharpness-aware minimization, neuro-inspired front-ends, and distributional input projection networks for model calibration and robustness. Noteworthy papers include 'Towards Understanding The Calibration Benefits of Sharpness-Aware Minimization', which proposes a variant of sharpness-aware minimization that improves model calibration; 'Explicitly Modeling Subcortical Vision with a Neuro-Inspired Front-End Improves CNN Robustness', which introduces a front-end block modeling the primate subcortical visual pathway to improve CNN robustness; and 'Towards Better Generalization via Distributional Input Projection Network', which projects inputs into learnable distributions at each layer to induce a smoother loss landscape and promote better generalization.
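For reference, the sketch below shows one training step of standard sharpness-aware minimization in PyTorch: the weights are first perturbed toward the locally worst-case direction within a small L2 ball of radius rho, and the update is then taken from the gradient computed at that perturbed point. This is the baseline SAM procedure, not the specific variant proposed in the paper; the function name, the rho value, and the model/loss interfaces are illustrative assumptions.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # First pass: gradient at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # Ascend to the approximate worst-case weights inside an L2 ball of radius rho.
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in model.parameters() if p.grad is not None]))
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # Second pass: gradient at the perturbed (sharpness-aware) point.
    model.zero_grad()
    loss_perturbed = loss_fn(model(x), y)
    loss_perturbed.backward()

    # Undo the perturbation, then update the original weights with that gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss_perturbed.item()
```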
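The distributional input projection idea can likewise be conveyed with a minimal sketch: each layer's input is replaced by a sample from a learnable Gaussian centered on it (via the reparameterization trick), so the network is effectively trained on distributions rather than point inputs. The layer below is a simplified illustration of that idea under stated assumptions, not the paper's actual module; the class name, the initialization, and the choice to use the mean at evaluation time are illustrative.

```python
import torch
import torch.nn as nn

class DistributionalProjection(nn.Module):
    """Replace a deterministic activation with a sample from a learnable
    Gaussian centered on it (reparameterization trick). Illustrative only."""
    def __init__(self, dim, init_log_sigma=-3.0):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.full((dim,), init_log_sigma))

    def forward(self, h):
        if self.training:
            # Sample around the deterministic activation during training.
            return h + torch.exp(self.log_sigma) * torch.randn_like(h)
        return h  # use the mean at evaluation time

# Example: inserting the projection after a hidden layer of a small MLP.
mlp = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), DistributionalProjection(256),
    nn.Linear(256, 10),
)
```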
Sources
A comparative analysis of a neural network with calculated weights and a neural network with random generation of weights based on the training dataset size
Improving Knowledge Distillation Under Unknown Covariate Shift Through Confidence-Guided Data Augmentation