The field of generative modeling and physics-aware simulation is advancing rapidly, with a focus on methods that generate realistic and physically plausible data. Recent work has explored latent diffusion models, variational autoencoders, and masked autoencoders to improve the quality and diversity of generated images and videos. There is also growing interest in incorporating physical properties and constraints into generative models, such as incompressible and compressible flow behavior, to enable more realistic simulations. Noteworthy papers in this area include AlphaVAE, which proposes a unified end-to-end method for RGBA image reconstruction and generation, and SketchDNN, which introduces a generative model that synthesizes CAD sketches with a unified continuous-discrete diffusion process. These advances could have broad impact across computer vision, graphics, and engineering.
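
As one illustration of how a physical constraint might be folded into generative training, the sketch below adds a soft incompressibility penalty to a model's loss by penalizing the squared divergence of a generated 2D velocity field, estimated with central finite differences. This is a minimal sketch, not the method of any paper cited above; the function name, tensor layout, and weighting term are illustrative assumptions.

```python
import torch

def divergence_penalty(velocity: torch.Tensor) -> torch.Tensor:
    """Soft incompressibility loss for a generated 2D velocity field.

    velocity: tensor of shape (batch, 2, H, W); channel 0 is u (x-velocity),
    channel 1 is v (y-velocity). Returns the mean squared divergence on the
    interior grid points, using central differences with unit grid spacing.
    """
    u, v = velocity[:, 0], velocity[:, 1]
    # du/dx along the width axis, dv/dy along the height axis (interior points).
    du_dx = (u[:, :, 2:] - u[:, :, :-2]) / 2.0
    dv_dy = (v[:, 2:, :] - v[:, :-2, :]) / 2.0
    # Crop both terms to the common interior region before summing.
    div = du_dx[:, 1:-1, :] + dv_dy[:, :, 1:-1]
    return (div ** 2).mean()

# Hypothetical usage during training of a generative model:
#   total_loss = generative_loss + lambda_div * divergence_penalty(generated_velocity)
# where lambda_div trades physical plausibility against reconstruction quality.
```

In practice, such a penalty only softly encourages divergence-free outputs; hard constraints (e.g., projecting generated fields onto a divergence-free subspace) are an alternative design choice with different trade-offs.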