Out-of-Distribution Detection Advances

The field of out-of-distribution (OOD) detection is advancing rapidly, with a growing focus on identifying and localizing unknown objects in safety-critical applications. Recent work highlights the limitations of current evaluation protocols and argues for incorporating established metrics from the Open Set community to give deeper insight into OOD detection performance. Approaches under exploration include variational information-theoretic methods, prototypical variational autoencoders, and overlap-aware estimation of model performance under distribution shift, all aimed at making OOD detection more reliable and accurate and, in turn, intelligent systems more trustworthy and safe.

Noteworthy contributions include a novel OOD scoring mechanism that leverages neuron-level relevance at the feature layer, and a memory-efficient differentially private training method that substantially reduces memory usage while matching the utility of first-order DP approaches. The Enclosing Prototypical Variational Autoencoder is another notable contribution: it extends self-explainable prototypical variational models with autoencoder-based OOD detection, providing explainable OOD detection and improved performance on common benchmarks.
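
To make the relevance-based scoring idea concrete, here is a minimal sketch of one plausible mechanism, not the exact method from the paper above: weight feature-layer activations by a per-neuron relevance proxy (activation times the predicted class's weight) and sum the positive relevance into a score, so that in-distribution inputs, whose features align with some class's weights, score higher than OOD inputs. All names and the toy data are illustrative assumptions.

```python
# Hedged sketch of a relevance-weighted OOD score (an assumed mechanism,
# not the published NERO method).
import numpy as np

rng = np.random.default_rng(0)

def relevance_map(features, class_weights):
    """Per-neuron relevance proxy: activation times the weight of the
    predicted class (a simple gradient-free stand-in for relevance)."""
    logits = features @ class_weights.T           # (n, num_classes)
    pred = logits.argmax(axis=1)                  # predicted class per sample
    return features * class_weights[pred]         # (n, d) relevance per neuron

def ood_score(features, class_weights):
    """Higher score = more in-distribution: sum of positive relevance."""
    rel = relevance_map(features, class_weights)
    return np.clip(rel, 0.0, None).sum(axis=1)

# Toy feature-layer outputs: ID features align with one class's weights,
# OOD features are isotropic noise with smaller, unaligned activations.
d, num_classes = 16, 4
W = rng.normal(size=(num_classes, d))
id_feats = np.abs(rng.normal(loc=2.0, size=(100, d))) * np.sign(W[0])
ood_feats = rng.normal(size=(100, d))

id_scores = ood_score(id_feats, W)
ood_scores = ood_score(ood_feats, W)
print(f"mean ID score {id_scores.mean():.2f}, "
      f"mean OOD score {ood_scores.mean():.2f}")
```

A threshold separating the two score distributions (e.g., chosen on held-out ID data) would then flag low-scoring inputs as OOD.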

Sources

FindMeIfYouCan: Bringing Open Set metrics to $\textit{near}$, $\textit{far}$ and $\textit{farther}$ Out-of-Distribution Object Detection

A Variational Information Theoretic Approach to Out-of-Distribution Detection

Enclosing Prototypical Variational Autoencoder for Explainable Out-of-Distribution Detection

ODD: Overlap-aware Estimation of Model Performance under Distribution Shift

NERO: Explainable Out-of-Distribution Detection with Neuron-level Relevance

Memory-Efficient Differentially Private Training with Gradient Random Projection
