The field of reinforcement learning is moving toward more complex, less structured problem domains, with a focus on integrating human feedback and insight to make learning more efficient and effective. This is evident in applications spanning 3D visuospatial tasks, robotic manipulation, and everyday activities. Techniques such as curriculum learning, error-related human brain signals, and attention-oriented metrics are becoming increasingly prominent; each aims to leverage human expertise and adaptability to enhance how reinforcement learning agents learn.
Noteworthy papers include: Accelerating Reinforcement Learning via Error-Related Human Brain Signals, which demonstrates the potential of integrating neural feedback to accelerate reinforcement learning in complex robotic manipulation settings; Attention Trajectories as a Diagnostic Axis for Deep Reinforcement Learning, which introduces attention-oriented metrics for investigating how an RL agent's attention develops during training; and NOIR 2.0: Neural Signal Operated Intelligent Robots for Everyday Activities, which presents an enhanced brain-robot interface that lets humans control robots for daily tasks via brain signals with improved accuracy and efficiency.
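To make the error-related-signal idea concrete, here is a minimal, illustrative sketch of one common way such neural feedback is used: treating a (noisy) error-related potential (ErrP) detection as an extra shaping penalty added to the environment reward during tabular Q-learning. This is not the method of any of the papers above; the detector (`simulated_errp`), its true/false positive rates, the corridor environment, and all hyperparameters are hypothetical choices for illustration only.

```python
import random


def simulated_errp(action_was_wrong, tpr=0.8, fpr=0.1, rng=random):
    """Hypothetical ErrP decoder: a noisy binary classifier that fires with
    probability tpr when the action was wrong and fpr when it was correct."""
    return rng.random() < (tpr if action_was_wrong else fpr)


def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, shaping=-0.5, seed=0):
    """Tabular Q-learning on a toy 1-D corridor (states 0..4, goal at 4).
    Whenever the simulated decoder flags a step, a shaping penalty is added
    to the reward -- reward shaping from (simulated) neural feedback."""
    rng = random.Random(seed)
    actions = (-1, 1)  # move left / move right
    q = {(s, a): 0.0 for s in range(5) for a in actions}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), 4)
            r = 1.0 if s2 == 4 else 0.0
            # ErrP shaping: moving left (away from the goal) is the "error"
            if simulated_errp(a == -1, rng=rng):
                r += shaping
            best_next = max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if s == 4:
                break
    return q
```

Even with an imperfect detector, the extra penalty on flagged actions steers exploration toward the goal earlier than the sparse environment reward alone would; the greedy policy learned here moves right from every non-goal state.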