The field of computer graphics is moving toward more realistic and efficient generation of lighting and materials. Recent work leverages generative models, diffusion-based methods, and reinforcement learning to improve the realism and diversity of synthesized materials and lighting effects. Notable directions include fine-tuning text-to-image models on datasets of real material photographs to generate high-quality materials, and adapting foundation diffusion models to control lighting in generated images. These advances promise to make 3D content authoring more accessible and to raise the overall visual fidelity of computer-generated imagery.

Noteworthy papers include: RealMat, a diffusion-based material generator that combines realistic priors with reinforcement-learning fine-tuning; PractiLight, which achieves state-of-the-art quality and controllability for lighting control in generated images; LuxDiT, a data-driven approach to estimating scene lighting from a single image or video; and Neural Bloom, which uses neural networks to render bloom lighting effects in real time, outperforming traditional techniques in both speed and quality.
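To make the material-generation direction concrete, the sketch below shows the basic idea of prompting a text-to-image diffusion model for a material texture using the Hugging Face diffusers library. This is not RealMat's pipeline (its realistic priors and reinforcement-learning stage are not reproduced here); the model id, prompt, and sampler settings are generic illustrative choices.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative base model; RealMat's actual checkpoint and fine-tuning
# data are not public knowledge here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Prompting for a flat-lit, seamless texture is a common trick for
# coaxing material-like outputs from a general text-to-image model.
prompt = "seamless photo of a weathered red brick wall, flat even lighting, top-down texture"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("brick_material.png")
```

A method like RealMat would go further, steering the model toward physically plausible, relightable material maps rather than a single baked image.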
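For context on Neural Bloom, the traditional bloom post-process it is compared against typically isolates pixels above a brightness threshold, blurs them, and composites the result back over the frame. Below is a minimal NumPy/SciPy sketch of that classic pipeline, under assumed function names and parameter defaults; a neural approach learns to approximate the expensive blurred-highlight pass directly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def classic_bloom(hdr_rgb: np.ndarray, threshold: float = 1.0,
                  sigma: float = 8.0, intensity: float = 0.6) -> np.ndarray:
    """Classic bloom: threshold bright pixels, blur them, add back.

    hdr_rgb: float array of shape (H, W, 3) holding linear HDR radiance.
    Parameter defaults are illustrative, not taken from any paper.
    """
    # Luminance from linear RGB (Rec. 709 weights).
    luma = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=hdr_rgb.dtype)
    # Fraction of each pixel's energy above the threshold (no soft knee).
    mask = np.clip(luma - threshold, 0.0, None) / np.maximum(luma, 1e-6)
    bright = hdr_rgb * mask[..., None]
    # Per-channel Gaussian blur approximates light scattering in the lens.
    blurred = np.stack(
        [gaussian_filter(bright[..., c], sigma=sigma) for c in range(3)],
        axis=-1,
    )
    # Additive composite of the blurred highlights over the original frame.
    return hdr_rgb + intensity * blurred
```

Real-time engines usually replace the single large Gaussian with a cheaper mip-chain blur; the appeal of a neural variant is matching or exceeding that quality at lower cost.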