The field of vision-language models is moving toward better handling of negation and out-of-distribution (OOD) data. Researchers are proposing methods such as test-time adaptation and knowledge-regularized negative feature tuning to address negation understanding and OOD detection, advances that could improve the performance and reliability of vision-language models across applications. Noteworthy papers include Negation-Aware Test-Time Adaptation for Vision-Language Models, which efficiently adjusts distribution-related parameters during inference, and COOkeD: Ensemble-based OOD detection in the era of zero-shot CLIP, which achieves state-of-the-art OOD detection by combining the predictions of multiple classifiers.
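To make the ensemble idea concrete, here is a minimal sketch of one common way to combine multiple classifiers for OOD detection: averaging each classifier's maximum softmax probability and thresholding the result. This is an illustrative baseline under assumed inputs (logit arrays per classifier), not the specific scoring rule used by COOkeD.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_ood_score(logits_per_classifier):
    """Average maximum-softmax-probability (MSP) across classifiers.

    logits_per_classifier: list of (num_samples, num_classes) arrays,
    one per classifier (e.g. a zero-shot CLIP head plus a linear probe).
    Higher score means the sample looks more in-distribution.
    """
    msp = [softmax(l).max(axis=-1) for l in logits_per_classifier]
    return np.mean(msp, axis=0)

# Toy usage: two classifiers scoring three samples over four classes.
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(3, 4))
logits_b = rng.normal(size=(3, 4))
scores = ensemble_ood_score([logits_a, logits_b])
is_ood = scores < 0.5  # the threshold here is an illustrative choice
```

In practice the threshold is calibrated on held-out in-distribution data, and the ensemble members can differ in architecture or training regime so their errors decorrelate.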