Diffusion Models and Optimal Transport in Generative Learning

The field of generative learning is shifting toward diffusion models and optimal transport techniques. Recent work reinterprets diffusion models through the lens of Wasserstein gradient flows, treating the underlying dynamics as a gradient flow of a divergence functional in Wasserstein space rather than merely a vehicle for score estimation, which offers a more principled framework for understanding what these models learn. In parallel, differentiable Expectation-Maximisation algorithms now allow optimal transport distances between Gaussian mixture models to be integrated into end-to-end learning pipelines. These advances have implications for image and video generation, finance, and reinforcement learning.

Noteworthy papers in this area include "Are We Really Learning the Score Function?", which challenges the conventional score-learning interpretation of diffusion models, and "Differentiable Expectation-Maximisation and Applications to Gaussian Mixture Model Optimal Transport", which introduces a differentiable route to computing optimal transport distances between Gaussian mixtures. In addition, "Coefficients-Preserving Sampling for Reinforcement Learning with Flow Matching" and "BranchGRPO: Stable and Efficient GRPO with Structured Branching in Diffusion Models" contribute more efficient and stable reinforcement learning algorithms for diffusion and flow-based models.
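To make the Gaussian-mixture optimal transport idea concrete, the following is a minimal, illustrative sketch, not the cited paper's implementation: the ground cost between mixture components is the closed-form 2-Wasserstein (Bures-Wasserstein) distance between Gaussians, and the component weights are coupled with an exact discrete OT solver. The function names are hypothetical, the POT library is an assumed dependency, and the paper's differentiable EM procedure is not reproduced here.

```python
# Illustrative sketch of an optimal transport distance between two Gaussian
# mixtures (mixture-Wasserstein style): solve a discrete OT problem between
# component weights, with the squared Bures-Wasserstein distance between
# components as the ground cost. Assumes numpy, scipy, and POT ("pip install pot").
import numpy as np
from scipy.linalg import sqrtm
import ot  # Python Optimal Transport


def bures_wasserstein_sq(m0, C0, m1, C1):
    """Squared 2-Wasserstein distance between Gaussians N(m0, C0) and N(m1, C1)."""
    C0_half = sqrtm(C0)
    cross = sqrtm(C0_half @ C1 @ C0_half)
    bures = np.trace(C0 + C1 - 2.0 * np.real(cross))
    return float(np.sum((m0 - m1) ** 2) + bures)


def gmm_ot_distance(weights0, means0, covs0, weights1, means1, covs1):
    """Discrete OT over mixture components with the Gaussian W2 cost as ground metric."""
    cost = np.zeros((len(weights0), len(weights1)))
    for i in range(len(weights0)):
        for j in range(len(weights1)):
            cost[i, j] = bures_wasserstein_sq(means0[i], covs0[i], means1[j], covs1[j])
    # Exact discrete OT between the component weight vectors.
    return np.sqrt(ot.emd2(weights0, weights1, cost))


if __name__ == "__main__":
    w0 = np.array([0.5, 0.5]); w1 = np.array([0.3, 0.7])
    mu0 = [np.zeros(2), np.ones(2)]; mu1 = [np.full(2, 0.5), np.full(2, 2.0)]
    S0 = [np.eye(2), 0.5 * np.eye(2)]; S1 = [np.eye(2), 2.0 * np.eye(2)]
    print("GMM-OT distance:", gmm_ot_distance(w0, mu0, S0, w1, mu1, S1))
```

This sketch only evaluates the distance; making it differentiable with respect to mixture parameters, as the cited paper does via a differentiable EM procedure, requires additional machinery not shown here.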

Sources

Are We Really Learning the Score Function? Reinterpreting Diffusion Models Through Wasserstein Gradient Flow Matching

Differentiable Expectation-Maximisation and Applications to Gaussian Mixture Model Optimal Transport

Finance-Grounded Optimization For Algorithmic Trading

Coefficients-Preserving Sampling for Reinforcement Learning with Flow Matching

BranchGRPO: Stable and Efficient GRPO with Structured Branching in Diffusion Models

Nested Optimal Transport Distances

Data-driven generative simulation of SDEs using diffusion models
