Advances in Autonomous Driving

The field of autonomous driving is moving toward more scalable and efficient approaches, with a focus on reinforcement learning and end-to-end training architectures. Recent work shows that simple reward designs can improve both performance and scalability, enabling training on large datasets and yielding state-of-the-art results. Simulation and world models are also becoming increasingly important, allowing driving policies to be trained more efficiently than with real-world data collection alone.

Noteworthy papers include:

  • CaRL: Learning Scalable Planning Policies with Simple Rewards, which proposes a simplified reward design that enables scalable reinforcement-learning training of planning policies.
  • Learning to Drive from a World Model, which presents an end-to-end architecture that trains a driving policy on real driving data inside an on-policy simulator.
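To make the "simple reward" idea concrete, here is a minimal sketch of what a dense, scalar per-step reward for a driving policy might look like. This is an illustrative assumption, not the actual reward from CaRL: the function name, arguments, and penalty values are hypothetical, and they only convey the general shape of such designs (route progress minus hard infraction penalties).

```python
def driving_reward(progress_m: float, collided: bool, off_route: bool) -> float:
    """Hypothetical per-step reward for an RL driving policy.

    Illustrative only -- the actual CaRL reward differs. The idea shown:
    a single dense signal (route progress in meters) with large penalties
    for hard infractions, avoiding hand-tuned multi-term reward shaping.
    """
    if collided or off_route:
        # Terminal penalty for an infraction ends the episode's reward.
        return -1.0
    # Otherwise, reward equals the meters of route progress this step.
    return progress_m
```

A design like this keeps the reward easy to reason about and cheap to compute at scale, which is one reason simple rewards pair well with large-batch RL training.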

Sources

CaRL: Learning Scalable Planning Policies with Simple Rewards

The Autonomous Software Stack of the FRED-003C: The Development That Led to Full-Scale Autonomous Racing

Imitation Learning for Autonomous Driving: Insights from Real-World Testing

Learning to Drive from a World Model
