The field of autonomous driving is moving towards more human-aligned and context-aware approaches. Researchers are leveraging large language models and vision-language models to improve decision-making and incident analysis in complex scenarios. A key direction is the development of frameworks that integrate structured and probabilistic reasoning to produce more interpretable and accurate results (a minimal illustrative sketch of this combination follows the list below). Noteworthy papers include:
- Align2Act, which proposes a motion planning framework that transforms instruction-tuned large language models into interpretable planners aligned with human behavior.
- DriveCritic, which introduces a novel framework for context-aware, human-aligned evaluation of autonomous driving systems using vision-language models.
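To make the "structured plus probabilistic reasoning" idea concrete, the sketch below combines explicit, human-readable safety rules with a probabilistic preference score of the kind a VLM or LLM evaluator might produce. This is an illustrative assumption only: the `Scenario` fields, thresholds, and `model_score` are hypothetical and are not taken from Align2Act or DriveCritic.

```python
# Illustrative sketch only; not code from Align2Act or DriveCritic.
# Shows how structured (rule-based) checks can be combined with a
# probabilistic model score to yield an interpretable evaluation.
from dataclasses import dataclass

@dataclass
class Scenario:
    speed_mps: float        # ego vehicle speed (m/s)
    gap_to_lead_m: float    # distance to lead vehicle (m)
    model_score: float      # hypothetical VLM/LLM preference score in [0, 1]

def rule_checks(s: Scenario) -> list[str]:
    """Structured reasoning: explicit, human-readable safety rules."""
    violations = []
    if s.gap_to_lead_m < 2.0 * s.speed_mps:   # less than a 2-second headway
        violations.append("insufficient headway")
    if s.speed_mps > 16.7:                    # above ~60 km/h in this hypothetical zone
        violations.append("over speed limit")
    return violations

def evaluate(s: Scenario) -> dict:
    """Combine rule outcomes with the probabilistic model score."""
    violations = rule_checks(s)
    # Hard rules veto the plan; otherwise the model score drives the ranking.
    final = 0.0 if violations else s.model_score
    return {"score": final, "violations": violations}

if __name__ == "__main__":
    print(evaluate(Scenario(speed_mps=9.0, gap_to_lead_m=25.0, model_score=0.82)))
    print(evaluate(Scenario(speed_mps=9.0, gap_to_lead_m=10.0, model_score=0.91)))
```

The design point this illustrates is that the rule layer stays inspectable (each violation is a named, traceable reason), while the learned component contributes a graded score, which is one plausible route to the interpretability these frameworks aim for.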