The field of machine learning is moving toward more efficient and adaptive methods, particularly in human-in-the-loop learning and optimization. Recent developments focus on improving the adaptability of models to complex target distributions and on incorporating human preferences and expert knowledge into the learning process. Notably, researchers are exploring ways to reduce the complexity of normalizing flows used for MCMC preconditioning, and are developing Bayesian optimization methods that can be steered online via user input. There is also growing interest in hybrid frameworks that combine the strengths of different approaches, such as reinforcement learning from human feedback (RLHF) and Bayesian preference inference. These advances have the potential to improve the efficiency and effectiveness of machine learning models across a variety of applications.

Noteworthy papers include:

- Deployable Vision-driven UAV River Navigation via Human-in-the-loop Preference Alignment, which introduces a human-in-the-loop learning method with a conservative overseer.
- Reducing normalizing flow complexity for MCMC preconditioning, which proposes a factorized architecture that simplifies the flow used to precondition the sampler (see the first sketch below).
- Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization, which enables repeated user interventions to steer BO mid-run (see the second sketch below).
- Efficient Reinforcement Learning from Human Feedback via Bayesian Preference Inference, which unifies the scalability of RLHF with the query efficiency of preference-based Bayesian optimization (see the third sketch below).
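To make the preconditioning idea concrete: a normalizing flow T maps a simple latent density toward the target, and running MCMC on the pulled-back density log p(T(z)) + log|det dT/dz| is often far easier than sampling p directly. The paper's factorized architecture is not reproduced here; the sketch below is a minimal, hedged illustration using the simplest possible factorized flow, an elementwise affine map, applied to a toy 2-D Gaussian target. Every name and constant in it is an illustrative assumption.

```python
import numpy as np

# Toy target standing in for a posterior: a correlated, badly scaled 2-D Gaussian.
COV = np.array([[4.0, 1.8], [1.8, 1.0]])
PREC = np.linalg.inv(COV)

def log_target(x):
    return -0.5 * x @ PREC @ x

# Factorized flow T(z) = mu + exp(s) * z: each dimension is rescaled
# independently, so the log-det Jacobian is just s.sum(). In practice mu and
# s would be fit (e.g. by minimizing KL to the target); here they are set
# from the target's marginal moments purely for illustration.
mu = np.zeros(2)
s = 0.5 * np.log(np.diag(COV))

def log_target_latent(z):
    # Pulled-back density: log p(T(z)) + log|det dT/dz|.
    return log_target(mu + np.exp(s) * z) + s.sum()

def random_walk_metropolis(logp, z0, n_steps=5000, step=0.8, seed=0):
    rng = np.random.default_rng(seed)
    z, lp = z0, logp(z0)
    out = []
    for _ in range(n_steps):
        prop = z + step * rng.standard_normal(z.shape)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            z, lp = prop, lp_prop
        out.append(z)
    return np.array(out)

# Sample in the well-scaled latent space, then push samples through the flow.
zs = random_walk_metropolis(log_target_latent, np.zeros(2))
xs = mu + np.exp(s) * zs
print("empirical covariance:\n", np.cov(xs.T))
```

Because the flow factorizes over dimensions, it corrects per-coordinate scale but leaves correlations to the sampler; that accuracy-for-simplicity trade-off is exactly what motivates reducing flow complexity in a preconditioner.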
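For the BO-steering idea, one established mechanism is prior-weighted acquisition in the style of piBO: expected improvement is multiplied by a user-supplied prior over the optimum's location, raised to a power that decays with the iteration count, and swapping the prior between iterations acts as a repeated intervention. The sketch below is a hedged toy in that style, not the paper's algorithm; the 1-D objective, RBF kernel, grid-based acquisition maximization, and the beta hyperparameter are all assumptions for illustration.

```python
import numpy as np
from math import erf

# --- tiny GP surrogate on [0, 1] (RBF kernel, near-noise-free observations) ---
def rbf(a, b, ls=0.15):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xq):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))     # jitter for numerical stability
    Ks = rbf(Xq, X)
    A = Ks @ np.linalg.inv(K)
    mu = A @ y
    var = 1.0 - np.sum(A * Ks, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd                       # minimization convention
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return sd * (z * Phi + phi)

def objective(x):                              # hypothetical 1-D HPO landscape
    return np.sin(12 * x) + 2 * (x - 0.7) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(size=3); y = objective(X)
grid = np.linspace(0, 1, 512)

# User-supplied prior over the optimum; replacing this function mid-run is
# the "repeated intervention" that dynamic priors allow.
prior = lambda x: np.exp(-0.5 * ((x - 0.55) / 0.1) ** 2)

beta = 5.0                                     # prior-confidence hyperparameter
for n in range(1, 16):
    mu, sd = gp_posterior(X, y, grid)
    acq = expected_improvement(mu, sd, y.min())
    acq *= prior(grid) ** (beta / n)           # piBO-style decaying prior weight
    x_next = grid[np.argmax(acq)]
    X = np.append(X, x_next); y = np.append(y, objective(x_next))
    if n == 8:                                 # the user steers the search mid-run
        prior = lambda x: np.exp(-0.5 * ((x - 0.9) / 0.05) ** 2)

print("best x found:", X[np.argmin(y)])
```

The decaying exponent beta/n means a wrong prior only slows the search early on, while a good (or corrected) prior concentrates queries where the user expects the optimum.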
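Finally, the core of Bayesian preference inference fits in a few lines: maintain a posterior over reward parameters under a Bradley-Terry likelihood for pairwise comparisons, and select the next query by posterior-predictive uncertainty so each expensive human label is maximally informative. The toy below uses a grid posterior over a 2-D linear reward and a simulated labeler; it is a hedged stand-in for the paper's method, and every quantity in it is illustrative.

```python
import numpy as np

# Toy items with 2-D features; the "human" prefers higher linear reward w_true . x.
rng = np.random.default_rng(1)
items = rng.uniform(-1, 1, size=(20, 2))
w_true = np.array([0.8, -0.6])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pref_prob(W, a, b):
    # P(a preferred over b | w) under the Bradley-Terry model.
    return sigmoid(W @ (items[a] - items[b]))

# Grid posterior over reward weights (uniform prior on the grid).
g = np.linspace(-1, 1, 41)
W = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
log_post = np.zeros(len(W))

for step in range(30):
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    # Query selection: among random candidate pairs, pick the one whose
    # outcome is most uncertain under the posterior predictive (a cheap
    # stand-in for information gain).
    best_u, query = -1.0, (0, 1)
    for _ in range(200):
        a, b = rng.choice(len(items), 2, replace=False)
        p = post @ pref_prob(W, a, b)          # posterior predictive P(a > b)
        u = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
        if u > best_u:
            best_u, query = u, (a, b)
    a, b = query
    y = rng.uniform() < sigmoid(w_true @ (items[a] - items[b]))  # simulated label
    p = pref_prob(W, a, b)
    log_post += np.log(p + 1e-12) if y else np.log(1 - p + 1e-12)

post = np.exp(log_post - log_post.max()); post /= post.sum()
print("posterior mean weights:", post @ W, "true:", w_true)
```

The uncertainty-driven query selection is what gives PBO its query efficiency; one reading of the claimed unification is that the same Bayesian inference, scaled from a toy grid to neural reward models, connects it back to RLHF.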