The field of Knowledge Graph Question Answering (KGQA) is moving toward more efficient and effective methods for multi-hop reasoning. Recent developments focus on improving the planning capabilities of Large Language Models (LLMs) and strengthening their ability to reason over structured knowledge graphs. Notable advancements include exemplar-guided planning, multi-view knowledge-graph-based retrieval-augmented generation, and dual-track knowledge graph verification frameworks, which have shown significant improvements in accuracy and efficiency across KGQA benchmarks. Noteworthy papers in this area include:

- Exemplar-Guided Planning: enhances the planning capabilities of LLM agents for KGQA by retrieving highly similar exemplar questions and their successful reasoning paths (a minimal sketch of this retrieval pattern appears after this list).
- Think Parallax: proposes a framework that symmetrically decouples queries and graph triples into multi-view spaces, enabling a robust retrieval architecture that explicitly enforces head diversity.
- DTKG: introduces a dual-track KG verification and reasoning framework that addresses the limitations of current multi-hop reasoning approaches.
- Think Straight, Stop Smart: proposes a structured multi-hop RAG framework designed for efficiency, introducing template-based reasoning and a retriever-based terminator (see the termination sketch after this list).
- Interpretable Question Answering with Knowledge Graphs: presents a question answering system that operates exclusively on knowledge graph retrieval, without relying on retrieval-augmented generation with large language models.
- Hierarchical Sequence Iteration for Heterogeneous Question Answering: introduces a unified framework that linearizes documents, tables, and knowledge graphs into a reversible hierarchical sequence with lightweight structural tags.
- GlobalRAG: proposes a reinforcement learning framework designed to enhance global reasoning in multi-hop QA by decomposing questions into subgoals, coordinating retrieval with reasoning, and refining evidence iteratively.
- Plan Then Retrieve: proposes a two-stage reinforcement fine-tuning KGQA framework that enables LLMs to perform autonomous planning and adaptive retrieval scheduling across KG and web sources under incomplete knowledge conditions.
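To make the exemplar-guided planning idea concrete, the sketch below shows the general pattern of retrieving similar exemplar questions and prepending their successful reasoning paths to a planning prompt. This is a minimal illustration, not the paper's implementation: the `Exemplar` type, the toy Jaccard similarity, and the prompt format are assumptions, and a real system would use dense embeddings and an actual LLM call.

```python
# Minimal sketch of exemplar-guided prompting for KGQA planning.
# Hypothetical names and data; a real system would use dense retrieval.

from dataclasses import dataclass


@dataclass
class Exemplar:
    question: str
    reasoning_path: str  # e.g. "film.directed_by -> person.nationality"


def jaccard(a: str, b: str) -> float:
    """Toy lexical similarity; stands in for embedding-based similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def build_planning_prompt(question: str, exemplars: list[Exemplar], k: int = 2) -> str:
    # Rank stored exemplars by similarity to the new question and keep the top-k.
    top = sorted(exemplars, key=lambda e: jaccard(question, e.question), reverse=True)[:k]
    demos = "\n".join(f"Q: {e.question}\nReasoning path: {e.reasoning_path}" for e in top)
    return (
        "Plan a reasoning path over the knowledge graph.\n\n"
        f"{demos}\n\nQ: {question}\nReasoning path:"
    )


if __name__ == "__main__":
    bank = [
        Exemplar("Who directed the film starring Tom Hanks in 1994?",
                 "film.actor -> film.directed_by"),
        Exemplar("What is the nationality of the director of Inception?",
                 "film.directed_by -> person.nationality"),
    ]
    print(build_planning_prompt("What is the nationality of the director of Titanic?", bank))
```

The retrieved reasoning paths act as in-context demonstrations, so the LLM can imitate plans that succeeded on structurally similar questions instead of planning from scratch.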
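The efficiency-oriented papers above (Think Straight, Stop Smart; GlobalRAG) share an iterative retrieve-then-decide loop with an early-stopping signal. The following is a hedged sketch of that generic pattern only, not any paper's code: the `Retriever` interface, the score threshold, and the subquery rewriting are assumptions standing in for learned components.

```python
# Hedged sketch of an iterative multi-hop retrieval loop with retriever-based
# termination: stop when no new evidence is retrieved confidently enough.

from typing import Callable

Retriever = Callable[[str], list[tuple[str, float]]]  # subquery -> [(evidence, score)]


def multi_hop_retrieve(question: str, retrieve: Retriever,
                       max_hops: int = 4, stop_threshold: float = 0.3) -> list[str]:
    evidence: list[str] = []
    subquery = question
    for _ in range(max_hops):
        # Ignore passages already collected so each hop adds new evidence.
        results = [(p, s) for p, s in retrieve(subquery) if p not in evidence]
        if not results:
            break
        best_passage, best_score = max(results, key=lambda x: x[1])
        # Retriever-based termination: if no candidate is confident enough,
        # assume the collected evidence suffices and stop iterating.
        if best_score < stop_threshold:
            break
        evidence.append(best_passage)
        # In a full system an LLM would rewrite the question into the next
        # subgoal here; we simply condition on the newest passage.
        subquery = f"{question} given that {best_passage}"
    return evidence


if __name__ == "__main__":
    def toy_retriever(q: str) -> list[tuple[str, float]]:
        corpus = {
            "Titanic was directed by James Cameron.": 0.9,
            "James Cameron is Canadian.": 0.6,
        }
        # Scores decay on later-hop (longer) queries, triggering termination.
        decay = 1 + q.count("given that")
        return [(p, s / decay) for p, s in corpus.items()]

    print(multi_hop_retrieve("What is the nationality of the director of Titanic?",
                             toy_retriever))
```

The design point shared by these methods is that the stopping decision is driven by retrieval signals (or a learned policy) rather than a fixed hop count, which is where the efficiency gains come from.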