Advances in Multimodal Reasoning and Perception

The field of multimodal large language models (MLLMs) is advancing rapidly, with a focus on improving reasoning and perception capabilities. Recent work highlights fine-grained visual perception as a primary bottleneck, and several benchmarks and datasets have been introduced to evaluate MLLMs in this area. These include VisuRiddles, HueManity, and Do You See Me, which assess MLLMs' ability to understand abstract graphics, perform nuanced perceptual tasks, and handle multidimensional visual perception challenges. Noteworthy papers include VisuRiddles, which introduces a benchmark targeting abstract visual reasoning and fine-grained perception, and SemVink, which improves VLMs' semantic understanding of optical illusions through visual global thinking. There is also growing interest in applying MLLMs to real-world settings such as disaster damage assessment and Humanities and Social Sciences tasks, with benchmarks like HSSBench and MMRB introduced to evaluate MLLMs in these domains.

Sources

VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL

Period-LLM: Extending the Periodic Capability of Multimodal Large Language Model

Agent-X: Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks

Do You See Me : A Multidimensional Benchmark for Evaluating Visual Perception in Multimodal LLMs

Fire360: A Benchmark for Robust Perception and Episodic Memory in Degraded 360-Degree Firefighting Videos

Entity Image and Mixed-Modal Image Retrieval Datasets

Minos: A Multimodal Evaluation Model for Bidirectional Generation Between Image and Text

VisuRiddles: Fine-grained Perception is a Primary Bottleneck for Multimodal Large Language Models in Abstract Visual Reasoning

SemVink: Advancing VLMs' Semantic Understanding of Optical Illusions via Visual Global Thinking

HueManity: Probing Fine-Grained Visual Perception in MLLMs

A Multimodal, Multilingual, and Multidimensional Pipeline for Fine-grained Crowdsourcing Earthquake Damage Evaluation

HSSBench: Benchmarking Humanities and Social Sciences Ability for Multimodal Large Language Models

Evaluating MLLMs with Multimodal Multi-image Reasoning Benchmark

Do Large Language Models Judge Error Severity Like Humans?
