The field of natural language processing is witnessing significant advances in multimodal and multilingual retrieval-augmented generation. Researchers are exploring approaches that improve the accuracy and factual consistency of generated output, particularly in low-resource languages and multimodal settings. One notable direction is the development of adaptive retrieval strategies, which let models decide when and how to consult external knowledge sources, cutting unnecessary retrieval calls and the computational overhead they incur. There is also a growing focus on uncovering and mitigating length biases in quality estimation (QE) metrics, as well as on improving the factual consistency of generated images.

Noteworthy papers in this area include: "Penalizing Length: Uncovering Systematic Bias in Quality Estimation Metrics", which proposes strategies to mitigate length biases in QE metrics; "Open Multimodal Retrieval-Augmented Factual Image Generation", which introduces an agentic open multimodal retrieval-augmented framework for factual image generation; "Windsock is Dancing: Adaptive Multimodal Retrieval-Augmented Generation", which proposes a query-dependent module for adaptive retrieval and modality selection; and "CRAG-MM: Multi-modal Multi-turn Comprehensive RAG Benchmark", which presents a comprehensive benchmark for multi-modal multi-turn conversations.
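To make the adaptive-retrieval idea concrete, the sketch below illustrates a query-dependent gate that decides whether to retrieve at all and, if so, which modality to query. It is not the mechanism from any of the cited papers; it is a minimal illustration under assumed inputs (a confidence score supplied by the generator and hand-written lexical cues standing in for a learned router), with all names and thresholds hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Modality(Enum):
    NONE = auto()   # answer from parametric knowledge, skip retrieval
    TEXT = auto()   # retrieve from a text corpus
    IMAGE = auto()  # retrieve from an image/caption index


@dataclass
class RetrievalDecision:
    modality: Modality
    reason: str


# Hypothetical keyword cues; a real query-dependent module would learn
# this routing from data rather than rely on a fixed vocabulary.
_VISUAL_CUES = {"photo", "image", "picture", "diagram", "logo", "painting"}
_FACTUAL_CUES = {"who", "when", "where", "latest", "population", "price"}


def decide_retrieval(query: str, model_confidence: float,
                     confidence_threshold: float = 0.75) -> RetrievalDecision:
    """Query-dependent gate: skip retrieval when the generator is already
    confident, otherwise pick a modality from simple lexical cues."""
    tokens = {t.strip("?.,!").lower() for t in query.split()}

    if model_confidence >= confidence_threshold:
        return RetrievalDecision(
            Modality.NONE, f"confidence {model_confidence:.2f} above threshold")
    if tokens & _VISUAL_CUES:
        return RetrievalDecision(Modality.IMAGE, "visual cue found in query")
    if tokens & _FACTUAL_CUES:
        return RetrievalDecision(Modality.TEXT, "factual cue found in query")
    # Fallback: low confidence but no clear cue, default to text retrieval.
    return RetrievalDecision(Modality.TEXT, "low confidence, no clear cue")


if __name__ == "__main__":
    print(decide_retrieval("Who designed the Sydney Opera House?", 0.40))
    print(decide_retrieval("Show me a photo of the opera house at night", 0.40))
    print(decide_retrieval("What is 2 + 2?", 0.95))
```

A learned gate would replace the keyword heuristics with a small classifier over query embeddings, but the control flow shown here (skip, text, or image retrieval) is the part that avoids unnecessary retrieval calls and, with them, the computational overhead the adaptive-retrieval line of work targets.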