The field of geospatial intelligence in large language models (LLMs) is advancing rapidly, with a focus on improving the accuracy and reliability of geospatial knowledge and reasoning. Recent work has highlighted the importance of mitigating geospatial hallucinations, which undermine the trustworthiness of LLM outputs. Researchers are developing new benchmarks and evaluation frameworks to measure and improve the spatial reasoning capabilities of these models. Notable papers in this area include:

- Mitigating Geospatial Knowledge Hallucination in Large Language Models: Benchmarking and Dynamic Factuality Aligning, which proposes a comprehensive evaluation framework and a dynamic factuality aligning method to mitigate geospatial hallucinations.
- MazeEval: A Benchmark for Testing Sequential Decision-Making in Language Models, which introduces a benchmark for evaluating pure spatial reasoning in LLMs through coordinate-based maze navigation tasks.