The field of multimodal research is moving toward integrating large-scale knowledge bases into generation models, improving both their performance and their ability to handle dynamic real-world applications. This is achieved through retrieval mechanisms that let models access and verify information against up-to-date evidence, reducing hallucinations and improving factual accuracy. A complementary direction is the development of frameworks for continual learning and adaptation to new datasets, allowing models to accumulate knowledge and improve on previously unseen scenarios.

Noteworthy papers include mRAG, which systematically dissects the multimodal retrieval-augmented generation pipeline and distills insights that yield an average performance boost of 5% without fine-tuning; V2X-UniPool, which unifies multimodal perception and knowledge reasoning for autonomous driving, significantly improving motion planning accuracy and reasoning capability; and Gen-n-Val, an agentic data generation framework that leverages layer diffusion and large language models to produce high-quality synthetic data, reducing the share of invalid data from 50% to 7% and improving performance by 1% mAP on rare classes.
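To make the retrieval mechanism concrete, the sketch below shows retrieval-augmented prompting in miniature: evidence passages are embedded, the top-k most similar ones are fetched for a query, and they are prepended to the prompt so the generator can ground its answer. The `embed` function, `KnowledgeBase` class, and sample passages are illustrative placeholders, not the mRAG pipeline itself; a real multimodal system would embed images and text with a shared encoder (e.g., a CLIP-style model) rather than the toy bag-of-words hashing used here.

```python
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash tokens into a fixed-size bag-of-words vector.
    Stand-in for a real multimodal encoder over images and text."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


class KnowledgeBase:
    """In-memory store of evidence passages with cosine-similarity retrieval."""

    def __init__(self, passages: list[str]):
        self.passages = passages
        # Rows are unit vectors, so a dot product gives cosine similarity.
        self.matrix = np.stack([embed(p) for p in passages])

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        scores = self.matrix @ embed(query)
        top = np.argsort(scores)[::-1][:k]
        return [self.passages[i] for i in top]


def augment_prompt(query: str, kb: KnowledgeBase) -> str:
    """Prepend retrieved evidence so the generator can verify against it."""
    context = "\n".join(f"[evidence] {p}" for p in kb.retrieve(query))
    return f"{context}\n[question] {query}"


if __name__ == "__main__":
    kb = KnowledgeBase([
        "The Golden Gate Bridge opened in 1937.",
        "Mount Fuji is the tallest mountain in Japan.",
        "The Louvre is located in Paris, France.",
    ])
    print(augment_prompt("When did the Golden Gate Bridge open?", kb))
```

Because only the index is updated as new evidence arrives, this pattern lets a generator stay grounded in up-to-date information without retraining, which is the property the trend above is exploiting.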