Advancements in Autonomous Systems and Reinforcement Learning

The field of autonomous systems and reinforcement learning is evolving rapidly, with a focus on developing more efficient, adaptive, and robust methods. Recent research has explored end-to-end frameworks, online adaptation, and compositional learning to improve the performance of autonomous systems in complex, dynamic environments. Notably, novel reinforcement learning algorithms and frameworks have enabled more effective adaptation to changing conditions and better generalization to unseen tasks.

One of the key trends in this area is the integration of reinforcement learning with other techniques, such as model-based control and transfer learning, to enhance the efficiency and robustness of autonomous systems. Additionally, there is a growing interest in developing methods that can learn from raw sensory data, such as images and sensor readings, and adapt to new environments and tasks.
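To make the idea of on-board online adaptation concrete, here is a minimal, illustrative sketch (not taken from any of the papers below): a single-gain policy maps a raw sensor reading directly to a control command, and a simple hill-climbing update tunes that gain from reward feedback after every episode, with no simulation pre-training. The task, reward shape, and update rule are all assumptions chosen for brevity.

```python
import random

def run_episode(policy_weight, disturbance):
    """One-step 'episode': the agent must cancel a disturbance it senses.
    Reward is higher the closer the control output is to -disturbance."""
    observation = disturbance                 # raw sensor reading
    action = policy_weight * observation      # end-to-end: observation -> command
    reward = -abs(action + disturbance)       # perfect cancellation gives reward 0
    return reward

def adapt_online(episodes=200, lr=0.1, seed=0):
    """On-board adaptation loop: after each episode, nudge the policy gain
    toward whichever perturbed variant earned more reward (hill climbing)."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(episodes):
        d = rng.uniform(-1.0, 1.0)
        base = run_episode(w, d)
        perturbed = run_episode(w + lr, d)
        w += lr if perturbed > base else -lr
    return w
```

Running `adapt_online()` drives the gain toward -1.0, the value that cancels the sensed disturbance; real on-board RL systems replace the scalar gain with a neural policy and the hill climb with a gradient estimator, but the closed loop of act, observe reward, update in place is the same.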

Some notable papers in this area include:

  • YOPOv2-Tracker, which proposes an end-to-end agile tracking and navigation framework for quadrotors that directly maps sensory observations to control commands.
  • Drive Fast, Learn Faster, which introduces a robust on-board RL framework for autonomous racing that eliminates the dependency on simulation-based pre-training.
  • Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation, which presents a novel hybrid RL-based FTC framework integrated with a transformer-based online adaptation module.
  • Automatic Curriculum Learning for Driving Scenarios, which proposes an automatic curriculum learning framework that dynamically generates driving scenarios with adaptive complexity based on the agent's evolving capabilities.
  • Modeling Unseen Environments with Language-guided Composable Causal Components in Reinforcement Learning, which introduces a novel framework that enhances RL generalization by learning and leveraging compositional causal components.
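The curriculum idea in the list above can be sketched in a few lines. This is a generic toy model, not the mechanism of any specific paper: scenario difficulty is raised when the agent's recent success rate exceeds a target and lowered when it falls short, so complexity tracks the agent's evolving capability. The target rate, step size, and fixed-skill agent are illustrative assumptions.

```python
def update_difficulty(difficulty, success_rate, target=0.7, step=0.05):
    """Raise scenario difficulty when the agent succeeds more often than the
    target rate, lower it when it struggles; clamp to [0, 1]."""
    if success_rate > target:
        difficulty += step
    elif success_rate < target:
        difficulty -= step
    return max(0.0, min(1.0, difficulty))

def run_curriculum(skill=0.8, rounds=50):
    """Toy curriculum loop: a fixed-skill agent succeeds whenever its skill
    exceeds the current difficulty, so difficulty settles near the agent's
    capability frontier."""
    difficulty = 0.0
    for _ in range(rounds):
        success_rate = 1.0 if skill > difficulty else 0.0
        difficulty = update_difficulty(difficulty, success_rate)
    return difficulty
```

With an agent of skill 0.8, the difficulty climbs from 0 and then oscillates just below 0.8, keeping scenarios near the edge of what the agent can handle; a real system would estimate the success rate from rolling episode statistics rather than a fixed skill value.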

Sources

YOPOv2-Tracker: An End-to-End Agile Tracking and Navigation Framework from Perception to Action

Drive Fast, Learn Faster: On-Board RL for High Performance Autonomous Racing

Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation

Automatic Curriculum Learning for Driving Scenarios: Towards Robust and Efficient Reinforcement Learning

Modeling Unseen Environments with Language-guided Composable Causal Components in Reinforcement Learning

Continual Reinforcement Learning via Autoencoder-Driven Task and New Environment Recognition

rfPG: Robust Finite-Memory Policy Gradients for Hidden-Model POMDPs

Out-of-distribution generalisation is hard: evidence from ARC-like tasks

General Dynamic Goal Recognition

Efficient Adaptation of Reinforcement Learning Agents to Sudden Environmental Change

Knowledge capture, adaptation and composition (KCAC): A framework for cross-task curriculum learning in robotic manipulation
