The field of computer vision is moving toward more accurate and efficient methods for geometric perception and texture generation. Recent developments have focused on improving the consistency of generated textures across different views and poses. Diffusion-based models have proven particularly versatile here: pretrained image diffusion backbones can be adapted to dense perception tasks such as depth, normal, and matting estimation, while incorporating illumination context and geometry-calibrated attention has markedly improved texture generation quality. Noteworthy papers include Edit2Perceive, which introduces a unified diffusion framework for dense perception tasks, and LumiTex, which proposes an end-to-end framework for high-fidelity PBR texture generation with illumination context. In addition, CaliTex advances view-coherent 3D texture generation, and FaithFusion advances controllable driving-scene reconstruction.
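
To make the idea of adapting an image diffusion model to dense perception concrete, the following is a minimal sketch under stated assumptions, not the actual Edit2Perceive (or any cited paper's) architecture: the input image is encoded and kept clean as conditioning, the target modality (here, a depth latent) is noised, the two are concatenated along the channel axis, and a denoiser is trained to predict the added noise. The `TinyDenoiser` module, channel layout, and linear noise schedule below are illustrative placeholders; real systems reuse a pretrained latent-diffusion U-Net.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for a diffusion U-Net (assumption: real systems reuse a pretrained backbone)."""
    def __init__(self, latent_ch=4, hidden=64):
        super().__init__()
        # Input: RGB conditioning latent + noisy depth latent + one timestep channel.
        self.net = nn.Sequential(
            nn.Conv2d(2 * latent_ch + 1, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),  # predicts the noise on the depth latent
        )

    def forward(self, noisy_depth, rgb_cond, t):
        # Broadcast the normalized timestep as an extra conditioning channel.
        t_map = (t.float() / 1000.0).view(-1, 1, 1, 1).expand(-1, 1, *noisy_depth.shape[2:])
        return self.net(torch.cat([rgb_cond, noisy_depth, t_map], dim=1))


def training_step(model, rgb_latent, depth_latent, num_steps=1000):
    """One DDPM-style training step: noise the depth latent, predict the noise back."""
    b = depth_latent.shape[0]
    t = torch.randint(0, num_steps, (b,), device=depth_latent.device)
    noise = torch.randn_like(depth_latent)
    # Simple linear alpha-bar schedule, purely for illustration.
    alpha_bar = (1.0 - t.float() / num_steps).view(-1, 1, 1, 1).clamp(min=1e-3)
    noisy_depth = alpha_bar.sqrt() * depth_latent + (1 - alpha_bar).sqrt() * noise
    pred = model(noisy_depth, rgb_latent, t)
    return nn.functional.mse_loss(pred, noise)


if __name__ == "__main__":
    model = TinyDenoiser()
    rgb = torch.randn(2, 4, 32, 32)    # encoded input image (conditioning, kept clean)
    depth = torch.randn(2, 4, 32, 32)  # encoded depth map (target being denoised)
    loss = training_step(model, rgb, depth)
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

The same conditioning pattern extends to normals or matting by swapping the target latent; the design choice worth noting is that the image stream is never noised, so the pretrained backbone's image prior is preserved while the model learns the image-to-geometry mapping.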