Advances in Vision-Language Models and Out-of-Distribution Detection

Vision-language models are steadily improving in two persistent weak spots: negation understanding and out-of-distribution (OOD) detection. Recent work addresses these gaps with techniques such as test-time adaptation and knowledge-regularized negative feature tuning, which aim to improve both the performance and the reliability of these models across applications. Noteworthy papers include Negation-Aware Test-Time Adaptation for Vision-Language Models, which efficiently adjusts distribution-related parameters during inference, and COOkeD: Ensemble-based OOD detection in the era of zero-shot CLIP, which reaches state-of-the-art OOD detection performance by combining the predictions of multiple classifiers.
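To make the ensemble idea behind COOkeD concrete, the sketch below averages the class probabilities of several classifiers (for example, a zero-shot CLIP head plus a supervised probe) and scores each input by one minus the maximum averaged probability, so higher scores suggest out-of-distribution inputs. This is a minimal illustration under assumed choices: the classifier mix, the averaging, and the scoring rule are generic conventions, not the paper's actual recipe, and `ensemble_ood_score` is a hypothetical helper name.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_ood_score(prob_list):
    """Average class probabilities from several classifiers and score
    each input by 1 - max averaged probability (a common OOD heuristic;
    assumed here, not taken from the COOkeD paper). Higher => more
    likely out-of-distribution."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)  # (N, C)
    return 1.0 - avg.max(axis=1)                        # (N,)

# Toy usage: two hypothetical classifiers, 4 inputs, 3 classes.
rng = np.random.default_rng(0)
probs_a = softmax(rng.normal(size=(4, 3)))  # stand-in for a CLIP head
probs_b = softmax(rng.normal(size=(4, 3)))  # stand-in for a probe
scores = ensemble_ood_score([probs_a, probs_b])
print(scores)  # threshold these scores to flag OOD inputs
```

In practice the threshold is tuned on held-out in-distribution data (for instance, to fix a target false-positive rate), and the ensemble members should be as complementary as possible, which is the motivation for pairing zero-shot and trained classifiers.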

Sources

Negation-Aware Test-Time Adaptation for Vision-Language Models

Knowledge Regularized Negative Feature Tuning for Out-of-Distribution Detection with Vision-Language Models

Pre-, In-, and Post-Processing Class Imbalance Mitigation Techniques for Failure Detection in Optical Networks

Handling Out-of-Distribution Data: A Survey

Generalized few-shot transfer learning architecture for modeling the EDFA gain spectrum

A Comprehensive Taxonomy of Negation for NLP and Neural Retrievers

COOkeD: Ensemble-based OOD detection in the era of zero-shot CLIP

ART: Adaptive Relation Tuning for Generalized Relation Prediction