The field of generative modeling is moving toward a deeper understanding of the geometric structure of data, with a focus on preserving that structure in the latent space. Recent work highlights the importance of both global and local geometric properties, as well as the role of inductive biases in shaping how generative models behave. New architectures and techniques, such as asymmetric autoencoders and selective underfitting, are enabling more accurate and robust modeling of complex data distributions. Notable papers in this area include:

- Multi-Scale Geometric Autoencoder, which introduces an asymmetric architecture for preserving geometric structure.
- Selective Underfitting in Diffusion Models, which refines our understanding of how diffusion models learn and generate samples.
- Diffusion Models and the Manifold Hypothesis, which provides evidence for the manifold hypothesis and explores the role of implicit regularization.
- Robust Tangent Space Estimation via Laplacian Eigenvector Gradient Orthogonalization, which proposes a spectral method for estimating tangent spaces in high-noise settings (a rough sketch of the idea follows the list).
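
The last item is the most algorithmically concrete. Below is a minimal sketch of the general idea as suggested by the title: build a k-nearest-neighbor graph Laplacian, estimate the local gradients of its low-frequency eigenvectors by least squares, and orthogonalize those gradients to obtain a tangent basis at each point. The function name, neighborhood size, and number of eigenvectors here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph


def tangent_bases_from_laplacian_gradients(X, d, k=15, n_eigvecs=10):
    """Estimate a d-dimensional tangent basis at each row of X (n points in R^D).

    Sketch: low-frequency Laplacian eigenvectors are smooth functions on the
    data manifold, so their gradients at a point lie approximately in the
    tangent space there. Estimating those gradients by local least squares and
    orthogonalizing them yields a tangent basis.
    """
    n, D = X.shape
    assert n_eigvecs >= d, "need at least d eigenvector gradients per point"

    # Symmetrized k-NN graph and its normalized Laplacian.
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)
    L = csgraph.laplacian(W, normed=True)

    # Low-frequency eigenvectors; drop the (near-)constant first one.
    _, vecs = eigsh(L, k=n_eigvecs + 1, which="SM")
    F = vecs[:, 1:]  # shape (n, n_eigvecs)

    neighbor_lists = W.tolil().rows
    bases = np.zeros((n, D, d))
    for i in range(n):
        idx = list(neighbor_lists[i])
        if len(idx) < d:
            continue  # degenerate neighborhood; leave this basis as zeros
        dX = X[idx] - X[i]   # local displacements, shape (m, D)
        dF = F[idx] - F[i]   # eigenvector increments, shape (m, n_eigvecs)
        # Least-squares gradients: column j of G approximates the gradient of eigenvector j at x_i.
        G, *_ = np.linalg.lstsq(dX, dF, rcond=None)   # shape (D, n_eigvecs)
        # Orthogonalize the gradients; the top-d left singular vectors span the estimated tangent space.
        U, _, _ = np.linalg.svd(G, full_matrices=False)
        bases[i] = U[:, :d]
    return bases


# Example (assumed setup): a noisy circle in R^2, whose tangent spaces are 1-dimensional.
theta = np.random.rand(500) * 2 * np.pi
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(500, 2)
bases = tangent_bases_from_laplacian_gradients(X, d=1)
```

The appeal of a spectral route over plain local PCA is that the eigenvectors are globally smooth functions of the data, so the per-point gradient estimates are less sensitive to noise in any single neighborhood, which is consistent with the paper's stated focus on high-noise settings.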