The field of document understanding and analysis continues to advance rapidly, driven by new models and techniques. Research is concentrating on the accuracy and efficiency of document processing, with particular emphasis on documents that combine complex visual, textual, and layout information. Multimodal large language models (MLLMs) are increasingly applied to extracting and interpreting information from document images: these models are designed to encode and fuse textual, visual, and layout features, and are trained under a variety of paradigms to improve performance. Notable approaches under investigation include relative polar coordinate encoding, content-aware vision tokenization, and zero-shot key information extraction; two brief sketches of the first two ideas follow the paper list below. Together, these developments point toward more accurate and robust document understanding and analysis systems.

Noteworthy papers in this area include:

- DocPolarBERT, which encodes layout with relative polar coordinates and achieves state-of-the-art results despite being pre-trained on a smaller dataset.
- VDInstruct, which introduces a content-aware vision tokenization strategy to improve key information extraction.
- DeQA-Doc, which adapts a state-of-the-art MLLM-based image quality scorer to document quality assessment and achieves significant performance gains.
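To make the first idea concrete, here is a minimal Python sketch of relative polar coordinate encoding in the spirit of DocPolarBERT: each pair of layout boxes is described by the quantized distance and angle between their centers, which could then index a learned attention-bias embedding. The bin counts, the helper name `relative_polar_features`, and the attention-bias usage are illustrative assumptions, not the paper's actual implementation.

```python
import math

def relative_polar_features(box_a, box_b, num_dist_bins=32,
                            num_angle_bins=16, max_dist=1000.0):
    """Describe where box_b's center lies relative to box_a's center
    as a (distance_bin, angle_bin) pair in polar coordinates.

    Boxes are (x0, y0, x1, y1) in page pixel coordinates; note that in
    image coordinates y grows downward, so angles are mirrored vertically.
    All bucket sizes here are illustrative assumptions.
    """
    # Centers of the two layout boxes.
    ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2

    dx, dy = bx - ax, by - ay
    dist = math.hypot(dx, dy)   # radial distance r
    angle = math.atan2(dy, dx)  # angle theta in [-pi, pi]

    # Quantize into discrete buckets so each (r, theta) pair can index
    # a learned bias table, one common way to inject relative layout
    # into self-attention.
    dist_bin = min(int(dist / max_dist * num_dist_bins), num_dist_bins - 1)
    angle_bin = int((angle + math.pi) / (2 * math.pi) * num_angle_bins) % num_angle_bins
    return dist_bin, angle_bin


# Example: a header cell above-left of a value cell.
print(relative_polar_features((100, 40, 220, 70), (320, 120, 440, 150)))
```

The appeal of the polar view is that distance and direction between tokens are captured directly, rather than being reconstructed from absolute x/y positions.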
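The intuition behind content-aware vision tokenization can likewise be sketched as allocating a fixed vision-token budget across detected layout regions in proportion to how information-dense they are, instead of tiling the page into a uniform patch grid. Everything below, including the scoring rule, the field names, and the fixed budget, is an assumption for illustration and not VDInstruct's actual method.

```python
import numpy as np

def content_aware_token_budget(regions, total_tokens=256):
    """Toy allocation: split `total_tokens` vision tokens across detected
    regions by area weighted with text density, so dense regions (tables,
    forms) get finer tokenization than sparse whitespace.

    `regions` is a list of dicts with 'bbox' (x0, y0, x1, y1) and
    'text_density' in [0, 1]; both fields are hypothetical inputs,
    e.g. from an OCR or layout-detection stage.
    """
    weights = []
    for r in regions:
        x0, y0, x1, y1 = r["bbox"]
        area = max(x1 - x0, 1) * max(y1 - y0, 1)
        # Small floor so nearly empty regions still receive a token.
        weights.append(area * (0.1 + r["text_density"]))
    weights = np.asarray(weights, dtype=float)
    shares = weights / weights.sum()
    return np.maximum(1, np.round(shares * total_tokens).astype(int)).tolist()


page = [
    {"bbox": (50, 50, 550, 120), "text_density": 0.9},   # dense header table
    {"bbox": (50, 140, 550, 700), "text_density": 0.3},  # sparse body text
]
print(content_aware_token_budget(page))  # e.g. [61, 195]
```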