Photonic Spiking Reinforcement Learning for Robotic Control

The field of robotic control is shifting toward the integration of photonic spiking reinforcement learning (RL) and neuromorphic hardware. This emerging direction aims to overcome the limitations of conventional electronic computing platforms, which often struggle to meet the real-time latency and energy-efficiency demands of robotic control. By combining the bandwidth and parallelism of photonic computing with the sparse, event-driven computation of spiking neural networks, researchers are developing architectures that handle high-dimensional state spaces and complex control tasks efficiently (a minimal sketch of a spiking policy follows the list below). Noteworthy papers in this area include:

  • A study that applied a photonic spiking RL system to robotic continuous control tasks, reporting a 23.33% reduction in steps to convergence and an energy efficiency of 1.39 TOPS/W.
  • A project that demonstrated the first on-orbit deployment of RL-based autonomous control of a free-flying robot, paving the way for future work in In-Space Servicing, Assembly, and Manufacturing (ISAM).
  • A pipeline for deploying RL policies on neuromorphic hardware, enabling low-latency, energy-efficient inference for robotic control tasks (see the quantization sketch after this list).
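
To make the spiking side of these architectures concrete, here is a minimal, hypothetical sketch of a leaky integrate-and-fire (LIF) policy: a single hidden layer of LIF neurons integrates an observation over several timesteps, and a continuous action is decoded from mean firing rates. All dimensions, constants, and weight initializations are illustrative assumptions, not details taken from the papers above.

```python
import numpy as np

class LIFPolicy:
    """Hypothetical spiking policy: LIF hidden layer + rate-coded readout."""

    def __init__(self, obs_dim, act_dim, hidden=64, tau=0.9, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0, 0.3, (hidden, obs_dim))   # input weights (illustrative)
        self.w_out = rng.normal(0, 0.3, (act_dim, hidden))  # readout weights (illustrative)
        self.tau = tau              # membrane leak factor per step
        self.threshold = threshold  # spike threshold

    def act(self, obs, steps=20):
        v = np.zeros(self.w_in.shape[0])        # membrane potentials
        rates = np.zeros_like(v)                # accumulated spike counts
        for _ in range(steps):
            v = self.tau * v + self.w_in @ obs  # leaky integration of input current
            spikes = (v >= self.threshold).astype(float)
            v = v * (1.0 - spikes)              # reset neurons that fired
            rates += spikes
        # Decode a continuous action from mean firing rates.
        return np.tanh(self.w_out @ (rates / steps))

policy = LIFPolicy(obs_dim=8, act_dim=2)
action = policy.act(np.random.default_rng(1).normal(size=8))
print(action)  # a 2-D continuous control command in [-1, 1]
```

In a photonic implementation, the weighted-sum step inside the loop is the part typically offloaded to optical hardware, while the rate decoding shown here corresponds to the electronic readout stage.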

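For the deployment pipeline mentioned in the last bullet, a common first step is quantizing a trained policy's floating-point weights to the fixed-point formats that neuromorphic chips such as Loihi 2 typically expect. The sketch below shows symmetric 8-bit quantization; the bit width and scaling scheme are assumptions for illustration, not details from the cited paper.

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Symmetric per-tensor quantization to signed fixed-point (illustrative)."""
    qmax = 2 ** (bits - 1) - 1         # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax   # map the largest weight magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale                    # chip stores q; host keeps the scale

w = np.random.default_rng(0).normal(0, 0.3, (64, 8)).astype(np.float32)
q, scale = quantize_weights(w)
print(np.max(np.abs(w - q.astype(np.float32) * scale)))  # worst-case quantization error
```
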
Sources

  • Hardware-Software Collaborative Computing of Photonic Spiking Reinforcement Learning for Robotic Continuous Control
  • Autonomous Planning In-space Assembly Reinforcement-learning free-flYer (APIARY) International Space Station Astrobee Testing
  • Crossing the Sim2Real Gap Between Simulation and Ground Testing to Space Deployment of Autonomous Free-flyer Control
  • Autonomous Reinforcement Learning Robot Control with Intel's Loihi 2 Neuromorphic Hardware
