Advances in Neural Network Robustness and Out-of-Distribution Detection

Research on neural networks is increasingly focused on robustness and out-of-distribution (OOD) detection, with new methods for training networks that generalize better and for flagging anomalous inputs. One direction applies kernel methods, in particular the Neural Tangent Kernel (NTK), to improve the robustness of neural networks. Another develops new techniques for OOD detection, including graph-level detection and non-linear kernel selections.

Notable papers include 'Training NTK to Generalize with KARE', which proposes a method for training the NTK explicitly, and 'Just One Layer Norm Guarantees Stable Extrapolation', which shows that adding a single layer normalization is enough to guarantee stable extrapolation. Two further papers, 'SpectralGap: Graph-Level Out-of-Distribution Detection via Laplacian Eigenvalue Gaps' and 'GradPCA: Leveraging NTK Alignment for Reliable Out-of-Distribution Detection', propose new OOD detection methods based on Laplacian eigenvalue gaps and NTK alignment, respectively (see the sketches below).
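
To make the kernel-PCA direction concrete, the sketch below scores inputs by their reconstruction error after projection onto the principal components of in-distribution features. This is only a minimal illustration of the general idea, not the method of the cited paper; the RBF kernel, the number of components, and the synthetic Gaussian "features" are all placeholder assumptions.

```python
# Minimal sketch of kernel-PCA-based OOD scoring via reconstruction error.
# Illustrative only; kernel choice, gamma, and the synthetic features are
# placeholder assumptions, not the cited paper's setup.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Stand-in "features": in-distribution samples cluster near the origin,
# OOD samples are shifted away.
id_train = rng.normal(0.0, 1.0, size=(500, 32))
id_test  = rng.normal(0.0, 1.0, size=(100, 32))
ood_test = rng.normal(4.0, 1.0, size=(100, 32))

# Fit kernel PCA on in-distribution features; fit_inverse_transform lets us
# map projections back to input space and measure reconstruction error.
kpca = KernelPCA(n_components=16, kernel="rbf", gamma=1.0 / 32,
                 fit_inverse_transform=True)
kpca.fit(id_train)

def ood_score(x):
    """Reconstruction error in input space; larger means more OOD-like."""
    recon = kpca.inverse_transform(kpca.transform(x))
    return np.linalg.norm(x - recon, axis=1)

print("mean ID score :", ood_score(id_test).mean())
print("mean OOD score:", ood_score(ood_test).mean())
```

Similarly, a graph-level detector in the spirit of SpectralGap relies on the eigenvalue gaps of a graph Laplacian. The snippet below computes them for a toy adjacency matrix; how the gaps are turned into an OOD score is left open, since that is specific to the paper.

```python
# Minimal sketch: Laplacian eigenvalue gaps for a toy undirected graph.
# The adjacency matrix and the use of the combinatorial Laplacian are
# illustrative assumptions.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

degrees = A.sum(axis=1)
L = np.diag(degrees) - A                      # combinatorial graph Laplacian
eigenvalues = np.sort(np.linalg.eigvalsh(L))  # ascending spectrum
gaps = np.diff(eigenvalues)                   # consecutive eigenvalue gaps
print("eigenvalues:", np.round(eigenvalues, 3))
print("gaps:", np.round(gaps, 3))
```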

Sources

Training NTK to Generalize with KARE

Adversarial Training from Mean Field Perspective

α-GAN by Rényi Cross Entropy

Just One Layer Norm Guarantees Stable Extrapolation

SpectralGap: Graph-Level Out-of-Distribution Detection via Laplacian Eigenvalue Gaps

Kernel PCA for Out-of-Distribution Detection: Non-Linear Kernel Selections and Approximations

GradPCA: Leveraging NTK Alignment for Reliable Out-of-Distribution Detection
