Advances in Medical Image Analysis with Vision-Language Models

Medical image analysis is advancing rapidly with vision-language models (VLMs) that integrate visual and textual information to improve diagnostic accuracy. Recent work applies VLMs to specialized domains such as dermatology and gastroenterology (including colonoscopy), where subtle regional findings and specialized clinical knowledge are crucial for accurate diagnosis. Large vision-language models such as MedDAM and SkinVL have shown promising results in generating region-specific descriptions and detecting skin diseases, and new datasets such as MM-Skin and Gut-VLM are enabling more accurate and robust models. Noteworthy contributions include MedDAM, a comprehensive framework for region-specific captioning in medical images; MM-Skin, a large-scale multimodal dermatology dataset derived from textbooks; a hallucination-aware multimodal benchmark that addresses hallucination in VLMs for gastrointestinal image analysis; and MAKE, a multi-aspect knowledge-enhanced vision-language pretraining framework for zero-shot dermatological assessment.
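The zero-shot assessment that frameworks like MAKE and Endo-CLIP target follows the general CLIP recipe: embed the image and a set of text prompts (one per candidate diagnosis) into a shared space, then rank labels by cosine similarity. The sketch below illustrates that idea only; the embeddings are toy random vectors, not outputs of any of the models above, and the function name is hypothetical.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels, temperature=0.07):
    """Rank candidate labels by cosine similarity between an image
    embedding and text-prompt embeddings (CLIP-style zero-shot)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature   # temperature-scaled cosine similarities
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()               # softmax over candidate labels
    return dict(zip(labels, probs))

# Toy embeddings for illustration. A real system would encode the image
# with a pretrained vision encoder and encode prompts such as
# "a dermoscopic image of melanoma" with the paired text encoder.
rng = np.random.default_rng(0)
labels = ["melanoma", "nevus", "basal cell carcinoma"]
image_emb = rng.normal(size=64)
text_embs = rng.normal(size=(3, 64))
text_embs[0] += image_emb  # make the "melanoma" prompt the closest match
scores = zero_shot_classify(image_emb, text_embs, labels)
print(max(scores, key=scores.get))
```

Because classification reduces to prompt-image matching, new disease labels can be added at inference time just by writing new prompts, which is why pretraining quality (the focus of MAKE's multi-aspect knowledge enhancement) matters so much for these models.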

Sources

Describe Anything in Medical Images

MM-Skin: Enhancing Dermatology Vision-Language Model with an Image-Text Dataset Derived from Textbooks

Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models

A Vision-Language Foundation Model for Leaf Disease Identification

Optimizing Neuro-Fuzzy and Colonial Competition Algorithms for Skin Cancer Diagnosis in Dermatoscopic Images

MAKE: Multi-Aspect Knowledge-Enhanced Vision-Language Pretraining for Zero-shot Dermatological Assessment

Endo-CLIP: Progressive Self-Supervised Pre-training on Raw Colonoscopy Records

DeepSeqCoco: A Robust Mobile Friendly Deep Learning Model for Detection of Diseases in Cocos nucifera
