Advances in Flow-Based Generative Models

The field of generative modeling is moving toward faster and more efficient sampling, with flow-based models at the center of this shift. Recent research shows that flow matching is a promising alternative to diffusion models, offering faster sampling and simpler training. Theoretical understanding of flow matching is also advancing, with new sample-complexity analyses and the development of more efficient training objectives. In addition, techniques such as joint distillation and risk-sensitive loss functions are being explored to further improve the performance of flow-based models. Noteworthy papers include Improved Mean Flows, which achieves state-of-the-art results on ImageNet 256x256 with a single function evaluation; ReflexFlow, which proposes a simple and effective reflexive refinement of the flow matching learning objective to alleviate exposure bias; and SimFlow, which simplifies end-to-end training of latent normalizing flows and sets a new state of the art on ImageNet 256x256 generation.
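
To make the recurring objective concrete, below is a minimal sketch of the standard conditional flow matching loss with a linear interpolation path, written in PyTorch. This illustrates the general technique only; the velocity network `model(x_t, t)` and its signature are hypothetical, and none of the listed papers' exact formulations (e.g. ReflexFlow's refinement or the mean-flow parameterization) are reproduced here.

```python
import torch

def flow_matching_loss(model, x1):
    """Conditional flow matching loss on the linear (rectified-flow) path.

    x1: a batch of data samples. `model(x_t, t)` is a hypothetical velocity
    network taking a point on the path and a time in [0, 1].
    """
    x0 = torch.randn_like(x1)                 # noise endpoint of the path
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1                # point on the straight path
    target_v = x1 - x0                        # constant target velocity
    pred_v = model(xt, t)                     # predicted velocity field
    return ((pred_v - target_v) ** 2).mean()  # simple regression objective

@torch.no_grad()
def sample(model, shape, steps=50, device="cpu"):
    """Euler integration of the learned ODE from noise (t=0) to data (t=1).

    Multi-step integration is the baseline; one-step methods such as mean
    flows aim to replace this loop with a single function evaluation.
    """
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0], *([1] * (len(shape) - 1))), i * dt,
                       device=device)
        x = x + model(x, t) * dt
    return x
```

The contrast between the multi-step sampler above and a single evaluation is exactly the efficiency gap that fast-forward approaches like Improved Mean Flows target.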

Sources

Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching

Improved Mean Flows: On the Challenges of Fastforward Generative Models

Joint Distillation for Fast Likelihood Evaluation and Sampling in Flow-based Models

From Navigation to Refinement: Revealing the Two-Stage Nature of Flow-based Diffusion Models through Oracle Velocity

Risk-Entropic Flow Matching

Fast & Efficient Normalizing Flows and Applications of Image Generative Models

SimFlow: Simplified and End-to-End Training of Latent Normalizing Flows

ReflexFlow: Rethinking Learning Objective for Exposure Bias Alleviation in Flow Matching
