The field of autonomous driving is rapidly advancing, with a focus on improving safety, trustworthiness, and generalization. Recent work explores vision-language models (VLMs) to enhance driving decision-making, with applications in risk perception, driver attention, and scene understanding. Notable frameworks include GraphPilot, which conditions language-based driving models on structured relational context, and VLA-R, an open-world end-to-end autonomous driving framework that couples open-world perception with a novel vision-action retrieval paradigm. In addition, benchmarks such as DSBench highlight the importance of evaluating VLMs' awareness of diverse safety risks in a unified manner. Overall, the field is moving toward more robust, interpretable, and generalizable autonomous driving systems. Noteworthy papers include GraphPilot, which achieved a 15.6% increase in driving score, and VLA-R, which demonstrated strong generalization and exploratory performance in unstructured environments.
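As a rough illustration of what conditioning a language-based driving model on structured relational context can look like, the sketch below serializes a small scene graph of subject-relation-object triples into the text prompt passed to a driving VLM. All names here (Relation, serialize_relations, build_prompt, the prompt wording) are hypothetical and only approximate the general idea; they are not GraphPilot's actual interface.

```python
from dataclasses import dataclass
from typing import List


# Hypothetical relational triple, e.g. ("ego", "is approaching", "intersection_1").
@dataclass
class Relation:
    subject: str
    predicate: str
    obj: str


def serialize_relations(relations: List[Relation]) -> str:
    """Flatten a small scene graph into a text block a driving VLM can consume."""
    return "\n".join(f"- {r.subject} {r.predicate} {r.obj}" for r in relations)


def build_prompt(relations: List[Relation], instruction: str) -> str:
    """Prepend structured relational context to the usual driving instruction."""
    return (
        "Scene relations:\n"
        f"{serialize_relations(relations)}\n\n"
        f"Task: {instruction}\n"
        "Answer with the next high-level driving action."
    )


if __name__ == "__main__":
    scene = [
        Relation("ego", "is approaching", "intersection_1"),
        Relation("pedestrian_2", "is crossing", "intersection_1"),
        Relation("car_3", "is stopped at", "intersection_1"),
    ]
    print(build_prompt(scene, "Decide whether to proceed or yield."))
```

The design choice illustrated is simply that relational structure is made explicit in the model's input rather than left implicit in raw pixels, which is the broad motivation behind graph-conditioned driving prompts.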