Advances in Vision-Language Models

The field of vision-language models is evolving rapidly, with a focus on improving multimodal understanding and generation. Recent work explores self-refinement frameworks, cross-modal guidance, and unified post-training paradigms to improve model performance. Tighter coupling of visual and linguistic representations has driven progress on tasks such as interleaved image-text generation and visual question answering. Challenges persist, however, including mitigating hallucinations and building more robust evaluation benchmarks.

Two noteworthy papers in this area are Towards Self-Refinement of Vision-Language Models with Triangular Consistency, which proposes a self-refinement framework for vision-language models, and Watermarking for Factuality: Guiding Vision-Language Models Toward Truth via Tri-layer Contrastive Decoding, which introduces a training-free decoding method that reduces hallucinations in vision-language models. A sketch of the general decoding idea follows below.
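
To make the training-free decoding idea concrete, here is a minimal sketch of contrastive decoding for hallucination mitigation. It assumes two hypothetical forward passes of a VLM (one conditioned on the image, one on text alone, standing in for the language prior) and uses toy logits; it illustrates the general family of contrastive decoding methods rather than the specific tri-layer scheme from the paper.

```python
# Minimal sketch of contrastive decoding for hallucination mitigation.
# Assumption: `logits_with_image` and `logits_text_only` come from two forward
# passes of some VLM; dummy arrays are used here so the example is runnable.
import numpy as np

def contrastive_decode_step(logits_with_image: np.ndarray,
                            logits_text_only: np.ndarray,
                            alpha: float = 1.0,
                            beta: float = 0.1) -> int:
    """Pick the next token by contrasting image-conditioned and text-only logits."""
    # Contrast: amplify evidence that requires the image, suppress the language prior.
    contrast = (1.0 + alpha) * logits_with_image - alpha * logits_text_only

    # Plausibility constraint: only keep tokens the image-conditioned model
    # already considers reasonably likely, to avoid degenerate choices.
    probs_full = np.exp(logits_with_image - logits_with_image.max())
    probs_full /= probs_full.sum()
    plausible = probs_full >= beta * probs_full.max()
    contrast[~plausible] = -np.inf

    return int(np.argmax(contrast))

# Toy 5-token vocabulary: token 2 is supported by the image, token 4 is
# favoured only by the language prior (a likely hallucination).
with_image = np.array([0.1, 0.2, 2.0, 0.3, 1.8])
text_only  = np.array([0.1, 0.2, 0.5, 0.3, 2.5])
print(contrastive_decode_step(with_image, text_only))  # -> 2
```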

Sources

Towards Self-Refinement of Vision-Language Models with Triangular Consistency

When Images Speak Louder: Mitigating Language Bias-induced Hallucinations in VLMs through Cross-Modal Guidance

ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models

IUT-Plug: A Plug-in tool for Interleaved Image-Text Generation

GIR-Bench: Versatile Benchmark for Generating Images with Reasoning

Vision Language Models Map Logos to Text via Semantic Entanglement in the Visual Projector

Beyond Seeing: Evaluating Multimodal LLMs on Tool-Enabled Image Perception, Transformation, and Reasoning

SRUM: Fine-Grained Self-Rewarding for Unified Multimodal Models

Self-Augmented Visual Contrastive Decoding

Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark

Generative Universal Verifier as Multimodal Meta-Reasoner

Watermarking for Factuality: Guiding Vision-Language Models Toward Truth via Tri-layer Contrastive Decoding
