The field of machine learning and state estimation is evolving rapidly, with a focus on developing innovative methods for approximating complex systems and improving estimation accuracy. Recent research has explored deep neural network approximations of Young measures, which enable non-convex variational problems to be reformulated as convex ones. There is also growing interest in online learning techniques for non-stationary data streams, which are crucial for applications such as fusion science. Reinforcement learning has been applied to data assimilation with unknown state models, allowing surrogate state transition models to be learned directly from noisy observations. In offline reinforcement learning, new methods such as adaptive neighborhood-constrained Q-learning restrict action selection to the union of neighborhoods of dataset actions.

Noteworthy papers include:

- Approximating Young Measures With Deep Neural Networks, which develops a deep neural network approximation of Young measures.
- Uncertainty Guided Online Ensemble for Non-stationary Data Streams in Fusion Science, which proposes an uncertainty-guided online ensemble method for improving prediction performance.
- Reinforcement learning based data assimilation for unknown state model, which integrates reinforcement learning with ensemble-based Bayesian filtering methods.
- Adaptive Neighborhood-Constrained Q Learning for Offline Reinforcement Learning, which proposes a new neighborhood constraint for restricting action selection.
- Enhancing Q-Value Updates in Deep Q-Learning via Successor-State Prediction, which explicitly models environment dynamics using a stochastic transition model.

Short illustrative sketches of these ideas are given below.
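For context on the first paper, the classical Young-measure relaxation (a textbook construction, not a claim about the paper's specific formulation) replaces a non-convex energy in the gradient with a functional that is linear, and hence convex, in a parametrized family of probability measures:

```latex
% Classical relaxation by Young measures \nu = (\nu_x)_{x \in \Omega}:
% the relaxed functional is linear in \nu, so the problem becomes convex.
\min_{u}\; I(u) \;=\; \int_{\Omega} W\bigl(\nabla u(x)\bigr)\,dx
\qquad\longrightarrow\qquad
\min_{\nu}\; \bar{I}(\nu) \;=\; \int_{\Omega}\!\int_{\mathbb{R}^{m\times n}}
  W(\lambda)\,d\nu_x(\lambda)\,dx .
```

A neural network can then parametrize the map from x to the measure ν_x, for instance as a finite mixture of Dirac measures with learned atoms and weights; whether the paper uses exactly this parametrization is an assumption here.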
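For the online-ensemble idea, a common heuristic is to weight ensemble members inversely to their predictive uncertainty and renormalize as the stream evolves. The sketch below assumes each member exposes a prediction and a variance estimate; the paper's exact weighting scheme may differ.

```python
# A minimal sketch of an uncertainty-guided ensemble combination:
# members with lower predictive variance get higher weight. This is a
# generic inverse-variance heuristic, not the paper's exact method.
import numpy as np

def combine(preds, variances):
    w = 1.0 / (np.asarray(variances) + 1e-8)  # inverse-variance weights
    w /= w.sum()                              # normalize to a convex combination
    return float(np.dot(w, preds))

# Three members: the most confident one (variance 0.1) dominates.
print(combine(preds=[1.0, 1.4, 0.9], variances=[0.1, 0.5, 0.2]))
```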
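For the data-assimilation paper, the sketch below shows how a learned surrogate transition model could slot into the forecast step of a stochastic ensemble Kalman filter. Here `surrogate_step` is a hand-written stand-in for a model learned from noisy observations, and the identity observation operator is an assumption for brevity.

```python
# A minimal sketch of ensemble-based Bayesian filtering with a learned
# surrogate transition model: forecast each member with the surrogate,
# then apply a standard stochastic EnKF analysis. All names are
# illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)

def surrogate_step(x):
    """Stand-in for a learned transition model x_t -> x_{t+1}."""
    return 0.95 * x + 0.1 * np.sin(x)

def enkf_step(ensemble, y_obs, obs_std):
    # Forecast: propagate every member with the surrogate plus model noise.
    forecast = surrogate_step(ensemble) + 0.05 * rng.normal(size=ensemble.shape)
    # Analysis: stochastic EnKF update with identity observation operator.
    mean = forecast.mean(axis=0)
    anomalies = forecast - mean
    cov = anomalies.T @ anomalies / (len(forecast) - 1)
    gain = cov @ np.linalg.inv(cov + obs_std**2 * np.eye(cov.shape[0]))
    perturbed_obs = y_obs + obs_std * rng.normal(size=forecast.shape)
    return forecast + (perturbed_obs - forecast) @ gain.T

ensemble = rng.normal(size=(50, 3))    # 50 members, 3-dimensional state
for y in rng.normal(size=(10, 3)):     # stream of noisy observations
    ensemble = enkf_step(ensemble, y, obs_std=0.2)
print(ensemble.mean(axis=0))           # posterior mean estimate
```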
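For the offline-RL paper, the sketch below illustrates the stated constraint at the level of the Bellman target: the maximization is restricted to candidate actions lying within radius eps of at least one dataset action. The function and parameter names (`q_net`, `eps`) are illustrative assumptions, not the paper's API.

```python
# A minimal sketch of a neighborhood-constrained Bellman backup for
# offline RL: the max in the target ranges only over candidates inside
# the union of eps-balls around dataset actions.
import numpy as np

rng = np.random.default_rng(0)

def q_net(state, actions):
    """Stand-in Q-function: replace with a learned critic."""
    return -np.sum((actions - state) ** 2, axis=-1)

def constrained_target(state, dataset_actions, candidates, eps, gamma, reward):
    # Distance from each candidate to its nearest dataset action.
    dists = np.linalg.norm(
        candidates[:, None, :] - dataset_actions[None, :, :], axis=-1
    ).min(axis=1)
    allowed = candidates[dists <= eps]   # union of eps-neighborhoods
    if len(allowed) == 0:                # fall back to dataset actions
        allowed = dataset_actions
    return reward + gamma * q_net(state, allowed).max()

state = rng.normal(size=2)
dataset_actions = rng.normal(size=(32, 2))   # actions seen in the dataset
candidates = rng.normal(size=(256, 2))       # e.g. sampled from a policy
print(constrained_target(state, dataset_actions, candidates,
                         eps=0.3, gamma=0.99, reward=1.0))
```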
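Finally, for the successor-state prediction paper, the sketch below forms a Q-target by averaging the bootstrapped value over samples from a stochastic transition model rather than using only the single observed next state. The Gaussian transition model and the critic here are stand-ins under stated assumptions.

```python
# A minimal sketch of a model-based Q-target: average max-Q over
# successor states sampled from a learned stochastic transition model.
import numpy as np

rng = np.random.default_rng(2)

def q_values(states):
    """Stand-in critic returning Q(s, a) for 4 discrete actions."""
    return np.stack([-np.sum((states - a) ** 2, axis=-1) for a in range(4)],
                    axis=-1)

def sample_successors(state, action, n=16):
    """Stand-in stochastic transition model p(s' | s, a)."""
    return state + 0.1 * action + 0.05 * rng.normal(size=(n, state.shape[-1]))

def model_based_target(state, action, reward, gamma=0.99):
    succ = sample_successors(state, action)          # (n, state_dim) samples
    return reward + gamma * q_values(succ).max(axis=-1).mean()

print(model_based_target(np.zeros(3), action=1, reward=0.5))
```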