The field of out-of-distribution (OOD) detection is advancing rapidly, with growing emphasis on identifying and localizing unknown objects in safety-critical applications. Recent work highlights the limitations of current evaluation protocols and argues for incorporating established metrics from the Open Set recognition community to give deeper insight into OOD detection performance. Researchers are exploring new approaches, including variational information-theoretic methods, prototypical variational autoencoders, and overlap-aware estimation of model performance under distribution shift, all aimed at making OOD detection more reliable and accurate and, in turn, enabling more trustworthy and safe intelligent systems. Noteworthy papers in this area include a novel OOD scoring mechanism that leverages neuron-level relevance at the feature layer, and a memory-efficient differentially private training method that substantially reduces memory usage while matching the utility of first-order DP approaches. The Enclosing Prototypical Variational Autoencoder is another notable contribution: it extends self-explainable prototypical variational models with autoencoder-based OOD detection, providing explainable OOD detection and improved performance on common benchmarks.
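Autoencoder-based OOD detection of the kind mentioned above typically scores an input by its reconstruction error: a model trained only on in-distribution data reconstructs familiar inputs well and unfamiliar inputs poorly. The sketch below is a minimal illustration of that idea using a linear autoencoder (equivalent to PCA) in NumPy; it is an assumed toy setup, not the architecture or scoring rule of any of the cited papers.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a linear autoencoder via SVD (i.e., PCA with k components)."""
    mu = X.mean(axis=0)
    # Rows of Vt are right singular vectors; the top k span the data subspace.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]  # encoder/decoder weights, shape (k, d)
    return mu, W

def ood_score(x, mu, W):
    """Reconstruction error: high error suggests an out-of-distribution input."""
    z = W @ (x - mu)          # encode into the k-dimensional latent space
    x_hat = mu + W.T @ z      # decode back to input space
    return float(np.sum((x - x_hat) ** 2))

rng = np.random.default_rng(0)
# In-distribution data lies near a 2-D subspace of R^5.
basis = rng.normal(size=(2, 5))
X_in = rng.normal(size=(500, 2)) @ basis
mu, W = fit_linear_autoencoder(X_in, k=2)

in_sample = rng.normal(size=2) @ basis   # lies in the learned subspace
ood_sample = rng.normal(size=5) * 3.0    # generic point off the subspace
```

Thresholding `ood_score` then yields a detector: the in-subspace sample reconstructs almost perfectly, while the off-subspace sample incurs a large error. Deep variants replace the linear maps with neural encoder/decoder pairs but keep the same scoring principle.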