Multimodal and Multilingual Advances in Language Models

Natural language processing is seeing rapid progress in the multimodal and multilingual capabilities of language models. Recent work points to a shift toward more inclusive models that handle a wide range of languages, modalities, and cultural contexts, with the goal of generalizing across languages and performing tasks such as machine translation, question answering, and text generation with high accuracy. Noteworthy papers in this area include IndicVisionBench, which introduces a benchmark for evaluating vision-language models in culturally diverse and multilingual settings, and mmJEE-Eval, a bilingual multimodal benchmark for evaluating scientific reasoning in vision-language models. Together, these papers illustrate the potential of multimodal and multilingual models to enable more effective communication across languages and cultures.

Sources

IndicVisionBench: Benchmarking Cultural and Multilingual Understanding in VLMs

Reasoning-Guided Claim Normalization for Noisy Multilingual Social Media Posts

Mind the Gap... or Not? How Translation Errors and Evaluation Details Skew Multilingual Results

Translation via Annotation: A Computational Study of Translating Classical Chinese into Japanese

A multimodal multiplex of the mental lexicon for multilingual individuals

It Takes Two: A Dual Stage Approach for Terminology-Aware Translation

AI-Driven Contribution Evaluation and Conflict Resolution: A Framework & Design for Group Workload Investigation

Quantification and object perception in Multimodal Large Language Models deviate from human linguistic cognition

VietMEAgent: Culturally-Aware Few-Shot Multimodal Explanation for Vietnamese Visual Question Answering

mmJEE-Eval: A Bilingual Multimodal Benchmark for Evaluating Scientific Reasoning in Vision-Language Models

MTQ-Eval: Multilingual Text Quality Evaluation for Language Models

Multimodal Large Language Models for Low-Resource Languages: A Case Study for Basque

NSL-MT: Linguistically Informed Negative Samples for Efficient Machine Translation in Low-Resource Languages
