The field of Bayesian inference and optimization is advancing rapidly. Recent research has focused on making Bayesian optimization more efficient and more scalable, with particular emphasis on reducing its computational cost. One key innovation is the use of deep generative models to sample directly from the posterior over the location of the optimum, eliminating the expensive surrogate re-fitting and acquisition-optimization steps that classical Bayesian optimization repeats at every iteration. There has also been significant progress on prior-fitted networks, which spend pre-training compute up front so that low-data problems can be handled in a single forward pass, and which may come to play a major role in data-scarce settings. A third notable advance is the introduction of model-informed flows for Bayesian inference, which deliver tighter posterior approximations and state-of-the-art performance across a range of benchmarks. Illustrative sketches of these three ideas appear after the paper summaries below.

Noteworthy papers in this area include Simplifying Bayesian Optimization Via In-Context Direct Optimum Sampling, which reports an efficiency gain of over 35x in wall-clock time compared to Gaussian-process-based Bayesian optimization; Position: The Future of Bayesian Prediction Is Prior-Fitted, which argues that prior-fitted networks represent the future of Bayesian inference; and Model Informed Flows for Bayesian Inference of Probabilistic Programs, which introduces an architecture that adds a translation mechanism, prior information, and hierarchical ordering to deliver state-of-the-art performance.
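To ground the 35x wall-clock comparison, here is a minimal sketch of the classical Gaussian-process-based BO loop that direct optimum sampling is benchmarked against: every iteration refits a GP surrogate and numerically optimizes an acquisition function, and this per-step work is precisely what an in-context generative model replaces with a single forward pass. The toy 1-D objective, the expected-improvement acquisition, and the grid-search maximization are illustrative simplifications, not the benchmark setup from the paper.

```python
# Classical GP-based BO loop (minimization). The per-iteration refit and
# acquisition optimization are where the wall-clock cost concentrates.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                      # arbitrary toy 1-D function to minimize
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))    # small initial design
y = objective(X).ravel()
grid = np.linspace(-2, 2, 500).reshape(-1, 1)

for step in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                       # surrogate re-fit at every iteration
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]       # acquisition optimization (grid search here)
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, objective(x_next).item())

print("best found:", X[np.argmin(y)].item(), y.min())
```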
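The prior-fitted-network idea can be made concrete with a toy: repeatedly sample whole datasets from a known prior, train a set-conditioned network to predict held-out targets from the context, and then, at test time, read off an approximate posterior predictive in a single forward pass with no per-task fitting. Everything below (the PFNSketch class, the linear-function prior in sample_task, the mean-pooling encoder) is an illustrative assumption; actual prior-fitted networks use transformer architectures and far richer priors.

```python
import torch
import torch.nn as nn

class PFNSketch(nn.Module):
    """Set-conditioned predictor: pool encoded (x, y) context pairs, then
    emit a Gaussian predictive (mean, log-variance) for a query x."""
    def __init__(self, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.head = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))

    def forward(self, ctx_x, ctx_y, query_x):
        pairs = torch.stack([ctx_x, ctx_y], dim=-1)   # (n_ctx, 2)
        h = self.enc(pairs).mean(dim=0)               # permutation-invariant pool
        out = self.head(torch.cat([h, query_x.view(1)]))
        return out[0], out[1]                         # predictive mean, log-var

def sample_task(n_ctx=10):
    # The prior over tasks: noisy linear functions with w ~ N(0, 1).
    w = torch.randn(())
    x = torch.rand(n_ctx + 1)
    y = w * x + 0.1 * torch.randn_like(x)
    return x[:n_ctx], y[:n_ctx], x[n_ctx:], y[n_ctx:]

model = PFNSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3000):  # pre-training: all the compute is spent here, once
    cx, cy, qx, qy = sample_task()
    mean, logvar = model(cx, cy, qx)
    # Gaussian NLL on the held-out point; minimizing it over prior-sampled
    # tasks pushes the network toward the posterior predictive.
    loss = 0.5 * (logvar + (qy.squeeze() - mean) ** 2 / logvar.exp())
    opt.zero_grad(); loss.backward(); opt.step()

# "Inference" on a fresh low-data task is now a single forward pass:
cx, cy, qx, _ = sample_task()
pred_mean, pred_logvar = model(cx, cy, qx)
```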
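Finally, a sketch of what flow-based posterior approximation looks like on a hierarchical target, here Neal's funnel, a standard toy from the probabilistic-programming literature. The variational family samples the parent variable v first and conditions the child x on it, mirroring the program's own hierarchical ordering; this is the structural intuition behind model-informed flows, though the OrderedFlow below is a hand-rolled autoregressive Gaussian, not the paper's architecture (which also adds a translation mechanism and prior information).

```python
import math
import torch
import torch.nn as nn

def log_p(v, x):
    # Unnormalized funnel density: v ~ N(0, 3^2), x | v ~ N(0, exp(v)).
    return -0.5 * (v / 3.0) ** 2 - 0.5 * x ** 2 * torch.exp(-v) - 0.5 * v

class OrderedFlow(nn.Module):
    """Autoregressive Gaussian q(v, x) = q(v) q(x | v), ordered like the model."""
    def __init__(self, hidden=32):
        super().__init__()
        self.mu_v = nn.Parameter(torch.zeros(()))
        self.log_s_v = nn.Parameter(torch.zeros(()))
        self.cond = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 2))  # -> mu_x(v), log s_x(v)

    def sample(self, n):
        e1, e2 = torch.randn(n), torch.randn(n)
        v = self.mu_v + self.log_s_v.exp() * e1
        mu_x, log_s_x = self.cond(v.unsqueeze(-1)).unbind(-1)
        x = mu_x + log_s_x.exp() * e2
        # log q by change of variables: two univariate Gaussians in sequence.
        log_q = (-0.5 * e1 ** 2 - self.log_s_v - 0.5 * e2 ** 2 - log_s_x
                 - math.log(2 * math.pi))
        return v, x, log_q

flow = OrderedFlow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
for _ in range(2000):
    v, x, log_q = flow.sample(256)
    loss = (log_q - log_p(v, x)).mean()  # Monte Carlo KL(q || p), up to a constant
    opt.zero_grad(); loss.backward(); opt.step()
```

Matching the conditional structure of q to the generative ordering of the program is what lets the flow capture the funnel's v-dependent scale, something a mean-field Gaussian cannot represent.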