The field of 3D human avatar reconstruction and animation is advancing rapidly, with a focus on improving the accuracy, efficiency, and controllability of avatar models. Recent work has centered on the expressiveness and realism of avatars, particularly facial expressions, hair, and clothing, while tackling long-standing challenges such as reconstructing high-quality avatars from limited input data, handling complex motion and interaction, and maintaining temporal consistency in online reconstruction. New methods span animatable 3D avatar reconstruction from a single image, geometry-aware texture generation for 3D head modeling, and unsupervised online video stitching with spatiotemporal bidirectional warps, with significant implications for entertainment, advertising, and virtual reality.

Notable papers include SVAD, which generates synthetic training data with a video diffusion model and refines it with identity-preservation and image-restoration modules before training a 3D Gaussian Splatting (3DGS) avatar; SignSplat, which regularizes Gaussian parameters to mitigate overfitting and rendering artifacts, and proposes an adaptive control method that densifies Gaussians and prunes splat points on the mesh surface; and GUAVA, which introduces an expressive human model to strengthen facial-expression capability and develops an accurate tracking method for fast, animatable upper-body 3D Gaussian avatar reconstruction.
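To make the SignSplat-style training ideas concrete, the sketch below illustrates (1) a regularization term on per-Gaussian parameters to curb overfitting and (2) one step of adaptive density control that clones Gaussians in under-reconstructed regions and prunes near-transparent ones. This is a minimal illustration of the general 3DGS adaptive-control mechanism, not the paper's actual implementation; all function names, penalty choices, and thresholds here are assumptions.

```python
# Illustrative sketch only: names, penalties, and thresholds are assumed,
# not taken from SignSplat. Requires PyTorch.
import torch

def gaussian_regularizer(scales, opacities, w_scale=0.01, w_opacity=0.001):
    """Penalize extreme per-Gaussian shapes and uncertain opacities
    (one plausible choice of regularizers against overfitting artifacts)."""
    # Discourage severe anisotropy: ratio of largest to smallest axis scale.
    aniso = scales.max(dim=-1).values / scales.min(dim=-1).values.clamp(min=1e-6)
    scale_loss = (aniso - 1.0).mean()
    # Push opacities away from the uninformative 0.5 region toward 0 or 1.
    opacity_loss = (opacities * (1.0 - opacities)).mean()
    return w_scale * scale_loss + w_opacity * opacity_loss

@torch.no_grad()
def densify_and_prune(means, scales, opacities, grad_accum,
                      grad_thresh=2e-4, opacity_thresh=0.005):
    """One adaptive-control step: clone Gaussians whose accumulated positional
    gradient is large (likely under-reconstructed areas) and drop splats whose
    opacity has decayed below a threshold."""
    clone_mask = grad_accum.norm(dim=-1) > grad_thresh
    means = torch.cat([means, means[clone_mask]])
    scales = torch.cat([scales, scales[clone_mask]])
    opacities = torch.cat([opacities, opacities[clone_mask]])
    keep = opacities.squeeze(-1) > opacity_thresh
    return means[keep], scales[keep], opacities[keep]
```

In standard 3DGS training loops, a step like `densify_and_prune` runs every few hundred iterations, after which the accumulated gradient statistics are reset; the regularizer is simply added to the photometric loss at every iteration.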