The field of multimodal models is advancing rapidly, with a focus on front-end engineering, recommendation systems, and unified perception and generation. Recent work includes comprehensive benchmarks such as DesignBench, a multi-framework, multi-task evaluation suite that assesses multimodal large language models (MLLMs) on automated front-end engineering tasks, including code generation, editing, and repair. Ming-Omni is a unified multimodal model that processes images, text, audio, and video, demonstrating strong proficiency in both speech and image generation. Pisces is an auto-regressive multimodal foundation model that addresses the challenge of unifying image understanding and generation through a novel decoupled visual encoding architecture and tailored training techniques.
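The intuition behind decoupled visual encoding is that understanding and generation place different demands on the image representation: understanding benefits from rich continuous semantic features, while autoregressive generation needs a representation the model can predict token by token (e.g., discrete codes). The sketch below illustrates that general idea only, not Pisces's actual architecture; every module name, dimension, and the VQ-style codebook are illustrative assumptions.

```python
# A minimal sketch of the *general* decoupled visual encoding idea, not
# Pisces's actual implementation: one encoder tuned for understanding
# (continuous semantic features) and one for generation (discrete tokens
# an autoregressive decoder can predict). All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn


class UnderstandingEncoder(nn.Module):
    """Maps an image to continuous semantic patch features (hypothetical)."""

    def __init__(self, dim=512):
        super().__init__()
        # Patchify with a strided convolution: (B, 3, H, W) -> (B, dim, H/16, W/16)
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, image):
        return self.patchify(image).flatten(2).transpose(1, 2)  # (B, P, dim)


class GenerationEncoder(nn.Module):
    """Quantizes an image into discrete codebook indices, VQ-style (hypothetical)."""

    def __init__(self, dim=512, codebook_size=1024):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, image):
        feats = self.patchify(image).flatten(2).transpose(1, 2)  # (B, P, dim)
        # Nearest-neighbor lookup against the codebook gives discrete ids.
        dists = torch.cdist(feats, self.codebook.weight.unsqueeze(0))
        return dists.argmin(dim=-1)  # (B, P) token ids


class DecoupledMultimodalModel(nn.Module):
    """Routes each task to the encoder suited for it (illustrative)."""

    def __init__(self, dim=512, codebook_size=1024):
        super().__init__()
        self.understand_enc = UnderstandingEncoder(dim)
        self.generate_enc = GenerationEncoder(dim, codebook_size)
        self.token_embed = nn.Embedding(codebook_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, task):
        if task == "understanding":
            vis = self.understand_enc(image)  # continuous features
        else:  # "generation": embed the discrete visual tokens instead
            vis = self.token_embed(self.generate_enc(image))
        return self.backbone(vis)


model = DecoupledMultimodalModel()
img = torch.randn(1, 3, 224, 224)
print(model(img, "understanding").shape)  # torch.Size([1, 196, 512])
print(model(img, "generation").shape)     # torch.Size([1, 196, 512])
```

In a real unified model the two encoders would typically be pretrained separately (for instance, a CLIP-style encoder for understanding and a VQ tokenizer for generation) before being aligned with the shared autoregressive backbone; the toy routing above only shows where the decoupling sits in the forward pass.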