Efficient Visual Navigation and Autonomous Driving

The field of visual navigation and autonomous driving is moving toward more efficient and adaptive models. Recent work focuses on dynamic feature and layer selection, improved early-exit decisions, and unified representations for trajectory planning. These advances aim to reduce computational overhead, improve interpretability, and enhance reliability in resource-constrained settings. Notable papers include DynaNav, which achieves a 2.26x reduction in FLOPs and 42.3% lower inference time, and Nav-EE, which reduces latency by up to 63.9% while maintaining accuracy. In addition, BEV-VLM and Max-V1 demonstrate significant improvements in planning accuracy and end-to-end trajectory prediction, respectively. Together, these approaches have the potential to enable more efficient and capable self-driving agents; a generic early-exit sketch follows below.
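To make the early-exit idea concrete, here is a minimal sketch of confidence-thresholded exiting over a stack of blocks. It is an illustration only, not the actual DynaNav or Nav-EE method; the block widths, exit-head design, and confidence threshold are assumptions.

```python
# Minimal sketch of confidence-based early exiting (illustrative, not the papers' code).
import torch
import torch.nn as nn


class EarlyExitBackbone(nn.Module):
    """Backbone with an auxiliary classifier after each block.

    At inference, the forward pass stops at the first block whose
    classifier is confident enough, skipping the remaining computation.
    """

    def __init__(self, num_classes: int = 10, width: int = 64, num_blocks: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(num_blocks)]
        )
        self.exits = nn.ModuleList(
            [nn.Linear(width, num_classes) for _ in range(num_blocks)]
        )

    @torch.no_grad()
    def forward(self, x: torch.Tensor, threshold: float = 0.9):
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            confidence, prediction = probs.max(dim=-1)
            # Exit as soon as this head is confident; later blocks are never run.
            if bool((confidence >= threshold).all()):
                return prediction, depth
        return prediction, depth  # fell through: used the full network


model = EarlyExitBackbone()
features = torch.randn(1, 64)  # stand-in for extracted visual features
label, exit_depth = model(features, threshold=0.9)
print(f"predicted class {label.item()} after block {exit_depth}")
```

Lowering the threshold trades accuracy for latency; navigation-guided variants like Nav-EE instead condition the exit decision on the downstream task, which this generic sketch does not capture.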

Sources

DynaNav: Dynamic Feature and Layer Selection for Efficient Visual Navigation

Beyond Greedy Exits: Improved Early Exit Decisions for Risk Control and Reliability

BEV-VLM: Trajectory Planning via Unified BEV Abstraction

Less is More: Lean yet Powerful Vision-Language Model for Autonomous Driving

VLOD-TTA: Test-Time Adaptation of Vision-Language Object Detectors

Non-submodular Visual Attention for Robot Navigation

Nav-EE: Navigation-Guided Early Exiting for Efficient Vision-Language Models in Autonomous Driving
