The field of large language models (LLMs) is evolving rapidly, with a growing focus on incorporating external knowledge to improve factual accuracy and reasoning. Recent work centers on integrating knowledge graphs, databases, and other structured data sources into retrieval-augmented generation (RAG) systems, yielding frameworks that can handle complex queries, temporal and causal relationships, and multi-hop reasoning. Notable advances include hierarchical lexical graphs, entity-event knowledge graphs, and task-driven tokens for continual knowledge graph embedding, all of which have improved the accuracy and informativeness of RAG outputs.

Noteworthy papers in this area include GraphRAG-Bench, a comprehensive benchmark for evaluating graph retrieval-augmented generation models; E^2RAG, a dual-graph framework that preserves temporal and causal facets for fine-grained reasoning; KG-Infused RAG, which integrates knowledge-graph retrieval into the RAG pipeline and reports promising results; and RAPL, a framework for efficient and effective graph retrieval in knowledge graph question answering that demonstrates strong retrieval capability and generalizability.
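To make the retrieval pattern these systems share more concrete, the sketch below shows one plausible shape of KG-augmented, multi-hop retrieval for RAG: seed entities are expanded over a graph up to a hop limit, and the collected triples are linearized into the generator's prompt. This is a minimal illustration of the general technique, not the method of any paper cited above; the toy graph, entity names, and helper functions are all assumptions made for the example.

```python
from collections import deque

# Toy knowledge graph as an adjacency list of (relation, object) edges.
# Entities and relations here are illustrative placeholders.
KG = {
    "Marie Curie": [("won", "Nobel Prize in Physics"), ("spouse", "Pierre Curie")],
    "Pierre Curie": [("won", "Nobel Prize in Physics")],
    "Nobel Prize in Physics": [("awarded_by", "Royal Swedish Academy of Sciences")],
}

def multi_hop_retrieve(seed_entities, max_hops=2):
    """Breadth-first expansion: collect all triples within max_hops of the seeds."""
    triples, visited = [], set(seed_entities)
    frontier = deque((entity, 0) for entity in seed_entities)
    while frontier:
        entity, depth = frontier.popleft()
        if depth >= max_hops:
            continue  # stop expanding beyond the hop limit
        for relation, obj in KG.get(entity, []):
            triples.append((entity, relation, obj))
            if obj not in visited:
                visited.add(obj)
                frontier.append((obj, depth + 1))
    return triples

def build_prompt(question, triples):
    """Linearize retrieved triples into textual context for the generator LLM."""
    facts = "\n".join(f"{s} --{r}--> {o}" for s, r, o in triples)
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

# Usage: expand from a seed entity, then hand the assembled prompt to an LLM.
print(build_prompt("Who awards the prize Marie Curie won?",
                   multi_hop_retrieve(["Marie Curie"])))
```

In practice, the systems surveyed above replace each piece of this sketch with something far richer (learned retrievers for seeding, dual or event-centric graphs for expansion, scoring to prune the frontier), but the retrieve-expand-linearize loop is the common skeleton.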