The field of generative recommendation is growing rapidly, with a focus on developing more efficient and personalized models. Recent research highlights tokenization and generation as central design choices, with a shift toward unified and bidirectional approaches; diffusion-based generation and disentangled steering methods have also shown promise for improving recommendation quality. Notable papers in this area include:

- BLOGER, which proposes a bi-level optimization framework for generative recommendation.
- Pctx, which introduces a personalized context-aware tokenizer.
- DiffGRM, which replaces autoregressive decoders with masked discrete diffusion models.
- SteerX, which proposes a disentangled steering method for LLM personalization.
- GReF, which introduces a unified generative framework for efficient reranking.
- MMQ-v2, which proposes a mixture-of-quantization framework for adaptive behavior mining.
- Modular Linear Tokenization, which introduces a reversible and deterministic technique for encoding high-cardinality categorical identifiers.

Together, these papers illustrate how quickly the field is advancing on performance, efficiency, and personalization.
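To make the contrast with autoregressive decoding concrete: masked discrete diffusion generates a sequence by starting fully masked and unmasking several positions in parallel at each step, rather than emitting tokens strictly left to right. The sketch below is a toy illustration of that general idea only, not DiffGRM's actual algorithm; `predict`, `toy_predict`, and the confidence-based unmasking schedule are all illustrative assumptions.

```python
import random

MASK = None  # sentinel for a not-yet-generated position

def masked_diffusion_decode(seq_len, predict, n_steps=4, seed=0):
    """Toy parallel decoder: start fully masked and, at each step, fill in
    the most confident masked positions (instead of generating one token
    at a time, left to right, as an autoregressive decoder would).
    `predict(seq, i, rng)` stands in for a trained model and returns a
    (token, confidence) pair for masked position i given the partial seq."""
    rng = random.Random(seed)
    seq = [MASK] * seq_len
    for step in range(n_steps):
        masked = [i for i, t in enumerate(seq) if t is MASK]
        if not masked:
            break
        proposals = {i: predict(seq, i, rng) for i in masked}
        # unmask a fraction of the remaining positions, highest confidence first
        k = max(1, len(masked) // (n_steps - step))
        for i in sorted(masked, key=lambda i: -proposals[i][1])[:k]:
            seq[i] = proposals[i][0]
    # fill any positions still masked after the step budget is spent
    for i, t in enumerate(seq):
        if t is MASK:
            seq[i] = predict(seq, i, rng)[0]
    return seq

def toy_predict(seq, i, rng):
    # stand-in for a model: a deterministic token with a random confidence
    return f"tok{i}", rng.random()
```

The key property shown is that multiple tokens can be committed per step, so decoding takes `n_steps` passes instead of one pass per token.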
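A reversible, deterministic encoding of high-cardinality identifiers, as Modular Linear Tokenization is described, can be sketched with plain modular arithmetic: decompose a large integer ID into a fixed number of small-vocabulary digits. This is a minimal sketch of that general idea under assumed parameters (`base`, `n_tokens` are illustrative, not the paper's configuration).

```python
def encode_id(item_id: int, base: int = 1000, n_tokens: int = 3) -> list[int]:
    """Map one high-cardinality integer ID to a fixed-length sequence of
    small tokens via repeated modular reduction (least-significant first).
    Deterministic, collision-free for ids below base ** n_tokens."""
    assert 0 <= item_id < base ** n_tokens, "ID outside representable range"
    tokens = []
    for _ in range(n_tokens):
        tokens.append(item_id % base)   # extract the current base-`base` digit
        item_id //= base                # shift to the next digit
    return tokens

def decode_id(tokens: list[int], base: int = 1000) -> int:
    """Exactly invert encode_id: the mapping is lossless by construction."""
    item_id = 0
    for tok in reversed(tokens):
        item_id = item_id * base + tok
    return item_id
```

With these defaults, three tokens drawn from a 1,000-entry vocabulary cover a billion distinct IDs, which is the usual appeal of such schemes over hashing: no collisions and an exact inverse.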