The field of image generation is advancing rapidly as new generative models deliver better sample quality and efficiency. Recent research has focused on combining complementary approaches, such as autoregressive models and diffusion models, into hybrid architectures that are both more powerful and more flexible, with sample quality matching that of dedicated state-of-the-art generators. There is also growing interest in models that perform multiple tasks, such as image generation, segmentation, and classification, within a single framework. Noteworthy papers in this area include STARFlow, which demonstrates the effectiveness of normalizing flows for high-resolution image synthesis; TransDiff, which combines autoregressive transformers with diffusion models to achieve state-of-the-art performance on image generation benchmarks; and Symmetrical Flow Matching, which unifies semantic segmentation, classification, and image generation in a single model.
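To make the flow matching idea mentioned above concrete, the sketch below shows the standard conditional flow matching training target: sample a point on the straight-line path between a noise sample x0 and a data sample x1, and regress a model's predicted velocity against the constant velocity x1 - x0 of that path. This is a generic, minimal illustration in pure Python, not the specific training procedure of STARFlow or Symmetrical Flow Matching; the function names are hypothetical.

```python
def flow_matching_pair(x0, x1, t):
    """Point x_t on the linear interpolant between x0 and x1 at time t,
    together with the target velocity for conditional flow matching.
    The straight path x_t = (1 - t) * x0 + t * x1 has constant velocity x1 - x0."""
    xt = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
    v_target = [b - a for a, b in zip(x0, x1)]
    return xt, v_target

def fm_loss(v_pred, v_target):
    """Mean squared error between predicted and target velocities,
    i.e. the per-sample conditional flow matching objective."""
    return sum((p - q) ** 2 for p, q in zip(v_pred, v_target)) / len(v_pred)

# Toy 2-D example: x0 plays the role of noise, x1 the role of data.
x0 = [0.0, 0.0]
x1 = [1.0, 2.0]
xt, v = flow_matching_pair(x0, x1, 0.5)
print(xt)  # [0.5, 1.0] -- midpoint of the path
print(v)   # [1.0, 2.0] -- constant target velocity
print(fm_loss(v, v))  # 0.0 -- a perfect velocity prediction has zero loss
```

In practice `v_pred` would come from a neural network conditioned on `xt` and `t`, and the loss would be averaged over random draws of `t`, `x0`, and `x1`.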