The field of document understanding is moving toward a more holistic approach that combines multimodal retrieval and reasoning to achieve comprehensive document intelligence. This shift is driven by the need to overcome the limitations of current approaches, which either lose structural detail or struggle to model long-range context. Retrieval-Augmented Generation (RAG) is becoming increasingly prominent in this setting, allowing models to ground their outputs in retrieved external evidence and thereby improve accuracy. Noteworthy papers in this area include: "Scaling Beyond Context", a systematic survey of multimodal RAG for document understanding; "Fine-Tuning MedGemma for Clinical Captioning", which proposes a framework for specializing the MedGemma model to generate high-fidelity clinical captions; and "Towards Mixed-Modal Retrieval for Universal Retrieval-Augmented Generation", which introduces a unified mixed-modal-to-mixed-modal retriever tailored to Universal Retrieval-Augmented Generation (URAG) scenarios.
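To make the retrieve-then-generate pattern behind RAG concrete, here is a minimal Python sketch. It is not any of the cited papers' methods: the word-overlap scorer stands in for a learned (potentially multimodal) retriever, and `generate` is a stub for prompting a language model with the retrieved context; both function names and the sample corpus are illustrative assumptions.

```python
# Minimal retrieve-then-generate sketch. The scoring here is plain word
# overlap; a real multimodal RAG system would embed the query and document
# pages (text, layout, images) with a learned encoder instead.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (stand-in for a retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(query_terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]


def generate(query: str, context: list[str]) -> str:
    """Stub generator: a real system would prompt an LLM with the retrieved
    context so that its answer is grounded in the source documents."""
    joined = " | ".join(context)
    return f"[grounded answer to {query!r} using: {joined}]"


if __name__ == "__main__":
    corpus = [
        "Table 3 reports quarterly revenue by region.",
        "Figure 2 shows the document layout analysis pipeline.",
        "Section 5 describes the clinical captioning protocol.",
    ]
    question = "Where is quarterly revenue reported?"
    print(generate(question, retrieve(question, corpus)))
```

The key design point this illustrates is the separation of concerns: retrieval narrows a large document collection to a small grounded context, and generation conditions on that context rather than on the model's parameters alone.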