The field of medical image synthesis is advancing rapidly with the development of diffusion models. These models have shown remarkable success in generating high-quality synthetic images for applications such as data augmentation, counterfactual generation, and disease progression modeling. Recent research has focused on improving the conditioning faithfulness and image quality of diffusion models, exploring techniques such as cycle training and prompt tuning (see the conditioning sketch after the list below). There is also growing interest in diffusion models that generate contrast-enhanced-style images without administering contrast agents, reducing the risks associated with their use.

Noteworthy papers in this area include:

- Automated Prompt Generation for Creative and Counterfactual Text-to-image Synthesis, which proposes an automatic prompt engineering framework for counterfactual image generation.
- Tunable-Generalization Diffusion Powered by Self-Supervised Contextual Sub-Data for Low-Dose CT Reconstruction, which trains the diffusion model on self-supervised contextual sub-data to reconstruct low-dose CT scans with tunable generalization.
- Causal-Adapter: Taming Text-to-Image Diffusion for Faithful Counterfactual Generation, which introduces a modular framework that adapts frozen text-to-image diffusion backbones for faithful counterfactual image generation.
- AortaDiff: A Unified Multitask Diffusion Framework For Contrast-Free AAA Imaging, which proposes a unified framework that generates synthetic contrast-enhanced CT images from non-contrast CT scans while simultaneously segmenting the aortic lumen and thrombus.
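To make the conditioning-faithfulness point concrete, here is a minimal, self-contained sketch of one classifier-free-guidance denoising step, a standard mechanism by which text conditioning is enforced in diffusion samplers. It is not the method of any paper cited above; the `eps_model` interface, the `ToyEps` stand-in network, the embedding sizes, and the guidance scale are all illustrative assumptions.

```python
# Minimal sketch (not any cited paper's method): one classifier-free-guidance
# (CFG) denoising step for a text-conditioned diffusion model in PyTorch.
import torch

def cfg_denoise_step(eps_model, x_t, t, cond_emb, null_emb, guidance_scale=3.0):
    """Predict noise with classifier-free guidance.

    eps_model(x, t, emb) -> predicted noise, same shape as x (assumed interface).
    cond_emb: embedding of the conditioning prompt (e.g., a counterfactual
    description such as "contrast-enhanced CT of the abdominal aorta").
    null_emb: embedding of the empty/unconditional prompt.
    """
    eps_cond = eps_model(x_t, t, cond_emb)    # conditional noise prediction
    eps_uncond = eps_model(x_t, t, null_emb)  # unconditional noise prediction
    # Guidance pushes the sample toward the conditioning signal; larger
    # scales trade sample diversity for conditioning faithfulness.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

if __name__ == "__main__":
    # Hypothetical toy stand-in for a denoising network: ignores the timestep
    # and maps the text embedding to a per-sample bias. Real models would be
    # U-Nets or transformers conditioned via cross-attention.
    class ToyEps(torch.nn.Module):
        def __init__(self, emb_dim=8):
            super().__init__()
            self.proj = torch.nn.Linear(emb_dim, 1)

        def forward(self, x, t, emb):
            return 0.1 * x + self.proj(emb).view(-1, 1, 1, 1)

    model = ToyEps()
    x = torch.randn(2, 1, 64, 64)             # e.g., single-channel CT slices
    t = torch.tensor([500, 500])              # diffusion timestep (unused here)
    cond, null = torch.randn(2, 8), torch.zeros(2, 8)
    eps = cfg_denoise_step(model, x, t, cond, null, guidance_scale=4.0)
    print(eps.shape)  # torch.Size([2, 1, 64, 64])
```

In a full sampler this guided noise estimate replaces the plain conditional prediction at every reverse-diffusion step; the guidance scale is the knob that trades image diversity for how faithfully the output follows the prompt.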