The field of generative models is moving toward faster, more efficient sampling, with particular focus on flow-based models. Recent work positions flow matching as a promising alternative to diffusion-based models, offering faster sampling and simpler training. Theoretical understanding is also advancing, with new sample-complexity analyses and more efficient training objectives, while techniques such as joint distillation and risk-sensitive loss functions are being explored to further improve flow-based models. Noteworthy papers include Improved Mean Flows, which achieves state-of-the-art results on ImageNet 256x256 with a single function evaluation; ReflexFlow, which proposes a simple and effective reflexive refinement of the Flow Matching learning objective to alleviate exposure bias; and SimFlow, which simplifies latent normalizing flows to end-to-end training and also sets a new state of the art on ImageNet 256x256 generation.
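To make the flow matching objective concrete, below is a minimal PyTorch sketch of the standard linear-path conditional flow-matching loss and an Euler sampler. This is an illustrative sketch, not the training recipe of any paper cited above: the `model` object, its `(x, t)` call signature, and the function names are assumptions, and single-step sampling here is plain one-step Euler integration, which only approximates what mean-flow-style models are trained to do.

```python
import torch

def flow_matching_loss(model, x1):
    """Conditional flow matching: regress a velocity field along straight paths.

    x1: a batch of data samples; x0 is drawn from a standard Gaussian prior.
    For the linear path x_t = (1 - t) * x0 + t * x1, the target velocity is
    the constant x1 - x0, independent of t.
    """
    x0 = torch.randn_like(x1)                              # noise from the prior
    # One time t per sample, shaped [B, 1, ..., 1] to broadcast over x1's dims.
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1                             # point on the straight path
    target = x1 - x0                                       # velocity of that path
    pred = model(xt, t.flatten())                          # assumed (x, t) signature
    return torch.mean((pred - target) ** 2)

@torch.no_grad()
def sample(model, shape, steps=1, device="cpu"):
    """Integrate the learned ODE from t=0 (noise) to t=1 (data) with Euler steps.

    steps=1 gives a single function evaluation; vanilla flow matching models
    typically need many steps for comparable quality.
    """
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * model(x, t)
    return x
```

The straight-line path is what makes the objective simple: the regression target needs no score function or noise schedule, which is one reason flow matching training is often described as simpler than diffusion training.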