Advances in Learning-Based Control and Optimization

The field of control and optimization is witnessing significant developments, with a focus on learning-based approaches and innovative applications of existing techniques. Researchers are exploring new methods to improve the stability and efficiency of control systems, particularly for nonlinear systems and time-varying inputs. Notably, the integration of machine learning and control theory is leading to more robust and adaptive control systems. Furthermore, advances in optimization techniques, such as the use of gradient flow and Lyapunov equations, are enabling more effective solutions to complex control problems. Some papers are particularly noteworthy for their innovative contributions, including:

  • One paper that generalizes Lagrangian Equilibrium Propagation to arbitrary boundary conditions and establishes its equivalence with Hamiltonian Echo Learning.
  • Another paper that presents a novel method for computing the optimal feedback gain of the infinite-horizon Linear Quadratic Regulator problem via an ordinary differential equation, bridging the gap between LQR and Reinforcement Learning.
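The gradient-flow idea behind the second paper can be illustrated with the standard policy-gradient view of continuous-time LQR: for a stabilizing gain K, the cost matrix P_K solves a Lyapunov equation, and flowing K along the negative cost gradient converges to the Riccati-optimal gain. The sketch below is a minimal illustration of that general technique, not the paper's specific ODE; the system matrices A, B, Q, R are hypothetical illustrative values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical 2-state system (illustrative values, not from the paper).
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

def cost_matrix(K):
    """P_K from the Lyapunov equation (A-BK)^T P + P (A-BK) = -(Q + K^T R K)."""
    Acl = A - B @ K
    return solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))

def policy_gradient(K, Sigma0=np.eye(2)):
    """LQR cost gradient w.r.t. K: 2 (R K - B^T P_K) Sigma_K."""
    Acl = A - B @ K
    P = cost_matrix(K)
    # Sigma_K solves (A-BK) Sigma + Sigma (A-BK)^T = -Sigma0.
    Sigma = solve_continuous_lyapunov(Acl, -Sigma0)
    return 2.0 * (R @ K - B.T @ P) @ Sigma

# Euler-discretized gradient flow, started from a stabilizing gain.
K = np.array([[1.0, 1.0]])
for _ in range(20000):
    K = K - 0.005 * policy_gradient(K)

# Riccati-based optimal gain for comparison.
P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.inv(R) @ B.T @ P_star
```

The flow stays inside the set of stabilizing gains and drives R K - B^T P_K to zero, at which point K coincides with the Riccati solution K* = R^{-1} B^T P*.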

Sources

Lagrangian-based Equilibrium Propagation: generalisation to arbitrary boundary conditions & equivalence with Hamiltonian Echo Learning

Support bound for differential elimination in polynomial dynamical systems

Bridging Continuous-time LQR and Reinforcement Learning via Gradient Flow of the Bellman Error

Learning-Based Stable Optimal Control for Infinite-Time Nonlinear Regulation Problems

Synthesizing Min-Max Control Barrier Functions For Switched Affine Systems

Penalty-Based Feedback Control and Finite Element Analysis for the Stabilization of Nonlinear Reaction-Diffusion Equations

The LLLR generalised Langton's ant
