The field of 3D human representation and animation is evolving rapidly, with a focus on developing more efficient and generalizable models. Recent research has emphasized capturing the relationships between different parts of the human body, such as the face and hair, to create more realistic and animatable avatars. Another key line of work uses knowledge-based global guidance and dynamic pose masking to improve the accuracy and quality of generated human images. There is also growing interest in skeleton-agnostic motion synthesis and in privacy-preserving photorealistic self-avatars for mixed reality. Noteworthy papers include HairCUP, which presents a universal prior model for 3D head avatars with explicit hair compositionality, enabling seamless transfer of face and hair components between avatars; PUMPS, which proposes a primordial autoencoder architecture for temporal point cloud data, enabling efficient and effective motion synthesis and prediction; and MoGA, which reconstructs high-fidelity 3D Gaussian avatars from a single-view image using a generative avatar model together with 3D appearance and geometric constraints.