Advancements in Reinforcement Learning and Control

The field of reinforcement learning and control is advancing rapidly, with a focus on developing more efficient, robust, and adaptable algorithms. Recent research explores quantum computing, adversarial robustness, and multi-agent systems as routes to better-performing reinforcement learning agents. There is also growing interest in advanced control architectures, particularly model predictive control (MPC) and its nonlinear variants, to improve the stability and robustness of controlled systems.

Notable papers in this area include 'Quantum Boltzmann Machines for Sample-Efficient Reinforcement Learning', which introduces a novel hybrid quantum-classical model for reinforcement learning, and 'Adversarially Robust Multitask Adaptive Control', which proposes a clustered multitask approach to mitigate corrupted model updates. Other significant contributions develop novel control architectures, such as 'Stable and Robust SLIP Model Control via Energy Conservation-Based Feedback Cancellation for Quadrupedal Applications' and 'A Tilting-Rotor Enhanced Quadcopter Fault-Tolerant Control Based on Non-Linear Model Predictive Control'. These advances have the potential to impact a wide range of applications, from robotics and autonomous systems to finance and healthcare.
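To make the MPC idea mentioned above concrete, here is a minimal illustrative sketch (not drawn from any of the cited papers): unconstrained linear MPC for a discrete-time system x_{t+1} = A x_t + B u_t, which reduces to a finite-horizon least-squares problem solved at every step, applying only the first input of the optimized sequence (receding horizon). The function name `mpc_step`, the horizon length, and the double-integrator test system are all assumptions made for this example.

```python
import numpy as np

def mpc_step(A, B, Q, R, x0, N=10):
    """One receding-horizon step of unconstrained linear MPC.

    Minimizes sum_{k=1..N} x_k' Q x_k + u_{k-1}' R u_{k-1} subject to
    x_{k+1} = A x_k + B u_k, then returns only the first input u_0.
    """
    n, m = B.shape
    # Stacked prediction: X = Sx @ x0 + Su @ U, where U stacks u_0..u_{N-1}.
    Sx = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    Su = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Su[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, i - j) @ B
            )
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    # Setting the gradient of the quadratic cost in U to zero gives a
    # linear system: (Su'QSu + R) U = -Su'Q Sx x0.
    H = Su.T @ Qbar @ Su + Rbar
    g = Su.T @ Qbar @ Sx @ x0
    U = np.linalg.solve(H, -g)
    return U[:m]  # apply only the first control input

# Example: regulate a double integrator from x = [5, 0] toward the origin.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[0.1]])
x = np.array([5.0, 0.0])
for _ in range(50):
    x = A @ x + B @ mpc_step(A, B, Q, R, x)
```

Constrained or nonlinear MPC, as in the quadcopter fault-tolerant work listed below, replaces this closed-form solve with a numerical optimizer at each step, but the receding-horizon structure is the same.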

Sources

Quantum Boltzmann Machines for Sample-Efficient Reinforcement Learning

Adversarially Robust Multitask Adaptive Control

Stable and Robust SLIP Model Control via Energy Conservation-Based Feedback Cancellation for Quadrupedal Applications

A Tilting-Rotor Enhanced Quadcopter Fault-Tolerant Control Based on Non-Linear Model Predictive Control

SAD-Flower: Flow Matching for Safe, Admissible, and Dynamically Consistent Planning

Partial Action Replacement: Tackling Distribution Shift in Offline MARL

Distributed Adaptive Estimation over Sensor Networks with Partially Unknown Source Dynamics

Comparative Study of Q-Learning for State-Feedback LQG Control with an Unknown Model

Algorithm-Relative Trajectory Valuation in Policy Gradient Control

Dual-MPC Footstep Planning for Robust Quadruped Locomotion

Test-driven Reinforcement Learning

Multi-layer barrier function-based adaptive super-twisting controller

PrefPoE: Advantage-Guided Preference Fusion for Learning Where to Explore

X-IONet: Cross-Platform Inertial Odometry Network with Dual-Stage Attention

Beyond Distributions: Geometric Action Control for Continuous Reinforcement Learning

Stability of Certainty-Equivalent Adaptive LQR for Linear Systems with Unknown Time-Varying Parameters

Learning Omnidirectional Locomotion for a Salamander-Like Quadruped Robot

LPPG-RL: Lexicographically Projected Policy Gradient Reinforcement Learning with Subproblem Exploration

Recursive Binary Identification under Data Tampering and Non-Persistent Excitation with Application to Emission Control

Interpretable by Design: Query-Specific Neural Modules for Explainable Reinforcement Learning

Information-Driven Fault Detection and Identification for Multi-Agent Spacecraft Systems: Collaborative On-Orbit Inspection Mission

Diffusion Policies with Value-Conditional Optimization for Offline Reinforcement Learning

APEX: Action Priors Enable Efficient Exploration for Robust Motion Tracking on Legged Robots

Data Fusion-Enhanced Decision Transformer for Stable Cross-Domain Generalization

Robust Estimation and Control for Heterogeneous Multi-agent Systems Based on Decentralized k-hop Prescribed Performance Observers

MARBLE: Multi-Armed Restless Bandits in Latent Markovian Environment

Quantum Meet-in-the-Middle Attacks on Key-Length Extension Constructions

Quasi-Newton Compatible Actor-Critic for Deterministic Policies
