The field of robotic control is advancing rapidly through the integration of diffusion policies, which have shown strong promise for imitation and reinforcement learning. Researchers are exploring methods to optimize diffusion-based policies, addressing challenges in stability, sample efficiency, and generalization. One key direction is adaptive gradient-based optimization, which enables faster and more stable fine-tuning of diffusion policies. Another is simulation-to-real transfer of learned policies, where privileged information available only in simulation guides navigation in dynamic environments. Studies are also investigating policy distillation under privileged information, tackling the information asymmetry and distributional shift that arise when the student cannot observe everything the teacher does. Noteworthy papers include:
- Adaptive Diffusion Policy Optimization for Robotic Manipulation, which proposes an Adam-based optimization framework for fine-tuning diffusion policies.
- NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance, which introduces an end-to-end framework for navigation in dynamic environments.
- Distilling Realizable Students from Unrealizable Teachers, which presents methods for policy distillation under privileged information.
- Fine-tuning Diffusion Policies with Backpropagation Through Diffusion Timesteps, which reformulates diffusion policies as noise-conditioned deterministic policies for improved optimization.
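The last idea can be made concrete with a minimal sketch, under toy assumptions: when sampling is deterministic (DDIM-style), the action produced by a diffusion policy is a deterministic, differentiable function of the initial noise and the denoiser parameters, so a task reward can be optimized by differentiating through all diffusion timesteps. Here the denoiser is a hand-written linear function standing in for a neural network, and the backward pass is approximated with finite differences to keep the example dependency-free; all names (`denoiser`, `rollout`, etc.) are hypothetical, not from the cited papers.

```python
T = 5          # number of diffusion timesteps
LR = 0.2       # gradient-ascent step size
EPS = 0.3      # a fixed initial noise draw; the chain is deterministic given it

def denoiser(x, t, theta):
    """Toy linear noise predictor (stands in for a learned network)."""
    return theta * x + 0.1 * t

def rollout(theta, eps):
    """Deterministic denoising chain: action = f_theta(eps)."""
    x = eps
    for t in reversed(range(T)):
        x = x - 0.1 * denoiser(x, t, theta)   # one deterministic update
    return x                                  # final denoised sample = action

def reward(action, target=1.0):
    """Task reward: prefer actions close to the target."""
    return -(action - target) ** 2

def grad(theta, eps, h=1e-5):
    """Finite-difference stand-in for backprop through all T timesteps."""
    return (reward(rollout(theta + h, eps))
            - reward(rollout(theta - h, eps))) / (2 * h)

theta = 0.5
for _ in range(300):
    theta += LR * grad(theta, EPS)   # gradient ascent on the task reward
```

Because every denoising step is deterministic, the gradient of the reward reaches the parameters through the whole chain rather than requiring a high-variance policy-gradient estimate over the stochastic sampling process.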