The field of multimodal document understanding is moving toward more interpretable and transparent models. Recent work focuses on integrating multiple modalities, such as text, images, and layout, to improve both performance and factual accuracy. Models that incorporate retrieval-augmented generation, holistic knowledge retrieval, and multi-hop reasoning show particular promise, with several reporting state-of-the-art results. Noteworthy papers in this area include MGA-VQA, which introduces a graph-based decision pathway for more transparent reasoning, and ARIAL, which uses an agentic framework to achieve precise answer extraction and reliable spatial grounding. Papers such as HKRAG and MERGE demonstrate the importance of holistic knowledge retrieval and entity-aware retrieval-augmented generation for visually rich documents, while VisionRAG, a multimodal retrieval system, shows potential for preserving spatial cues and reducing pipeline complexity.
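
To make the shared retrieval-augmented pattern concrete, the sketch below indexes document pages by a joint text-and-layout representation, retrieves the top-k pages for a question, and conditions generation on that evidence only. It is a minimal illustration under assumed interfaces: the toy hashing embedder and the `Page`, `PageIndex`, and `answer` names are hypothetical stand-ins, not the architecture of MGA-VQA, ARIAL, HKRAG, MERGE, or VisionRAG.

```python
import numpy as np
from dataclasses import dataclass, field

# Hypothetical sketch of retrieval-augmented QA over visually rich documents:
# index each page, retrieve the most relevant pages for a question, and
# generate an answer conditioned only on the retrieved evidence.

EMBED_DIM = 256

def embed_text(text: str) -> np.ndarray:
    """Toy bag-of-words hashing embedder standing in for a real multimodal encoder."""
    vec = np.zeros(EMBED_DIM)
    for token in text.lower().split():
        vec[hash(token) % EMBED_DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

@dataclass
class Page:
    doc_id: str
    page_no: int
    ocr_text: str              # recognized text in reading order
    layout_hint: str = ""      # e.g. "table", "figure caption"; kept as plain text here
    embedding: np.ndarray = field(init=False)

    def __post_init__(self):
        # Index text together with a coarse layout hint so spatial structure is not discarded.
        self.embedding = embed_text(self.ocr_text + " " + self.layout_hint)

class PageIndex:
    def __init__(self, pages: list[Page]):
        self.pages = pages

    def retrieve(self, question: str, k: int = 3) -> list[Page]:
        q = embed_text(question)
        scored = sorted(self.pages, key=lambda p: float(p.embedding @ q), reverse=True)
        return scored[:k]

def answer(question: str, index: PageIndex, generate=None) -> str:
    """Assemble retrieved evidence into a prompt; `generate` would wrap a VLM/LLM call."""
    evidence = index.retrieve(question)
    context = "\n\n".join(
        f"[{p.doc_id} p.{p.page_no} | {p.layout_hint}]\n{p.ocr_text}" for p in evidence
    )
    prompt = f"Answer using only the evidence below.\n\n{context}\n\nQuestion: {question}"
    return generate(prompt) if generate else prompt  # without a model, return the prompt

if __name__ == "__main__":
    idx = PageIndex([
        Page("invoice-17", 1, "Total amount due 1,280 USD payable within 30 days", "table"),
        Page("invoice-17", 2, "Shipping address 5 Main Street Springfield", "paragraph"),
    ])
    print(answer("What is the total amount due?", idx))
```

Keeping a layout hint alongside the OCR text in the index is one simple way to retain spatial cues during retrieval, echoing the emphasis several of these papers place on grounding answers in document structure rather than in flattened text alone.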