The field of image and color processing is seeing significant developments, with a focus on improving the accuracy and controllability of style transfer, colorization, and image generation. Researchers are exploring new approaches that apply style features only within specific regions of interest and that give users finer control over the color schemes of generated images. The integration of deep learning techniques, such as diffusion models and large language models, is enabling more refined and controllable image processing. Noteworthy papers in this area include:

- Improving Masked Style Transfer using Blended Partial Convolution, which proposes a partial-convolution-based style transfer network that applies style features accurately within a region mask (a minimal sketch of the partial-convolution idea follows this list).
- Exploring Palette based Color Guidance in Diffusion Models, which improves color scheme control by treating color palettes as a separate guidance mechanism (see the guidance sketch below).
- ColorGPT, which leverages large language models for multimodal color recommendation and outperforms existing methods in color suggestion accuracy.
- MangaDiT, a Diffusion Transformer model for reference-guided line art colorization that achieves superior performance in both qualitative and quantitative evaluations.
- ToonComposer, which streamlines cartoon production with generative post-keyframing, reducing manual workload and improving flexibility.
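The summary above does not spell out the network in Improving Masked Style Transfer using Blended Partial Convolution, but the building block it names, partial convolution over a region mask, is well established. The PyTorch sketch below illustrates that idea only: features outside the mask are ignored and responses are renormalized by window coverage. The class name `PartialConv2d`, its interface, and the renormalization details are assumptions for illustration, not the paper's code, and the paper's "blended" variant is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Convolution restricted to a binary region mask: out-of-mask features are
    ignored and responses are renormalized by how much of each window is valid.
    (Illustrative sketch; not the architecture from the paper.)"""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, padding: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        # All-ones kernel used only to count in-mask pixels per window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.padding = padding
        self.window = float(kernel_size * kernel_size)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features; mask: (B, 1, H, W), 1 inside the styled region.
        out = self.conv(x * mask)                              # drop out-of-mask features
        coverage = F.conv2d(mask, self.ones, padding=self.padding)
        out = out * (self.window / coverage.clamp(min=1.0))    # renormalize near boundaries
        return out * (coverage > 0).float()                    # zero windows with no mask


# Usage: style features are propagated only inside the region of interest.
feats = torch.randn(1, 64, 32, 32)
mask = (torch.rand(1, 1, 32, 32) > 0.5).float()
styled = PartialConv2d(64, 64)(feats, mask)
```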
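Likewise, the exact conditioning scheme of the palette-guidance paper is not described in this summary. One generic way to treat a palette as a separate guidance mechanism is multi-condition classifier-free guidance, sketched below; the function name, the guidance weights, and the existence of a palette-conditioned noise prediction are illustrative assumptions rather than the paper's method.

```python
def palette_guided_eps(eps_uncond, eps_text, eps_palette, w_text=7.5, w_palette=2.0):
    """Generic multi-condition classifier-free guidance: the palette term is added
    as its own guidance direction with an independent weight, so color control can
    be strengthened or weakened without changing the text-prompt guidance."""
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_palette * (eps_palette - eps_uncond))
```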