Advances in Adaptive Learning and Stochastic Systems

Research in adaptive learning and stochastic systems is converging on more robust and efficient algorithms for complex, real-world data. One thread combines offline and online learning so that models trained on historical data can keep adapting to changing environments, improving prediction performance for nonlinear stochastic dynamical systems. A second thread addresses stochastic data-driven tasks such as smoothing, prediction, and control within unified Bayesian frameworks. A third concerns reinforcement learning under partial observability and perturbation, where causal state representations and diffusion-based correction methods are showing promising results. Noteworthy contributions include a two-stage learning framework that integrates offline and online algorithms for nonlinear stochastic dynamical systems, and a framework that strengthens reinforcement learning by uncovering the causal structure of partially observable Markov decision processes. Researchers are also exploring Bayesian dynamic scheduling for multipurpose batch processes and accelerated probabilistic response-time analysis for safety-critical real-time applications.
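To make the offline-then-online idea concrete, the sketch below is a minimal, illustrative example rather than the method of any cited paper: it fits a linear-in-features dynamics model by batch ridge regression on logged data (offline stage), then refines the same parameters with recursive least squares as new samples arrive (online stage). The feature map `features`, the forgetting factor, and all variable names are assumptions introduced for illustration.

```python
import numpy as np

def features(x, u):
    """Hypothetical feature map for a nonlinear model x_{t+1} ~ theta^T phi(x_t, u_t)."""
    return np.array([x, u, x * u, x**2, 1.0])

def offline_fit(X, U, X_next, lam=1e-3):
    """Stage 1: batch ridge regression on logged (state, input, next-state) data."""
    Phi = np.stack([features(x, u) for x, u in zip(X, U)])   # (N, d) design matrix
    d = Phi.shape[1]
    P = np.linalg.inv(Phi.T @ Phi + lam * np.eye(d))         # inverse information matrix
    theta = P @ Phi.T @ X_next                                # initial parameter estimate
    return theta, P

def online_update(theta, P, x, u, x_next, forgetting=0.99):
    """Stage 2: recursive least-squares update so the model tracks a drifting system."""
    phi = features(x, u)
    denom = forgetting + phi @ P @ phi
    K = P @ phi / denom                                       # gain vector
    theta = theta + K * (x_next - phi @ theta)                # correct the prediction error
    P = (P - np.outer(K, phi @ P)) / forgetting               # update inverse information
    return theta, P

# Offline stage on logged data, then online refinement as new samples stream in.
rng = np.random.default_rng(0)
X, U = rng.normal(size=200), rng.normal(size=200)
X_next = 0.8 * X + 0.3 * U + 0.05 * X * U + 0.01 * rng.normal(size=200)
theta, P = offline_fit(X, U, X_next)
for x, u, x_next in zip(X[:50], U[:50], X_next[:50]):
    theta, P = online_update(theta, P, x, u, x_next)
```

The two-stage structure mirrors the surveyed direction: the batch fit supplies a good prior from historical data, while the recursive update keeps the estimate current when the environment changes.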

Sources

Adaptive prediction theory combining offline and online learning

Learning Causal States Under Partial Observability and Perturbation

An Empirical Study on the Effectiveness of Incorporating Offline RL As Online RL Subroutines

Algorithmic Guarantees for Distilling Supervised and Offline RL Datasets

Approximating Analytically-Intractable Likelihood Densities with Deterministic Arithmetic for Optimal Particle Filtering

Bayesian dynamic scheduling of multipurpose batch processes under incomplete look-ahead information

Accelerating Probabilistic Response-Time Analysis: Revised Critical Instant and Optimized Convolution

A Unified Bayesian Framework for Stochastic Data-Driven Smoothing, Prediction, and Control

Forecasting in Offline Reinforcement Learning for Non-stationary Environments

Cross-Domain Offline Policy Adaptation with Dynamics- and Value-Aligned Data Filtering

Dual-Robust Cross-Domain Offline Reinforcement Learning Against Dynamics Shifts

Tempering the Bayes Filter towards Improved Model-Based Estimation

Dynamic Correction of Erroneous State Estimates via Diffusion Bayesian Exploration

Guided Flow Policy: Learning from High-Value Actions in Offline Reinforcement Learning

Long-Horizon Model-Based Offline Reinforcement Learning Without Conservatism

Stability-Guaranteed Dual Kalman Filtering for Electrochemical Battery State Estimation
