The field of chemical synthesis and design is moving towards more interpretable and explainable models. Recent work has focused on combining the strengths of large language models with those of specialized chemistry models to improve reasoning and decision-making. This has led to frameworks that provide natural language explanations for their predictions and that optimize chemical synthesis routes under various constraints. Reinforcement learning and chain-of-thought supervised fine-tuning have also been explored to improve both performance and interpretability. Notable papers in this area include Retro-Expert, which proposes a collaborative reasoning framework for interpretable retrosynthesis; LARC, which presents an agentic framework for constrained retrosynthesis planning that achieves human-level success rates; PepThink-R1, which introduces a generative framework for interpretable cyclic peptide optimization using large language models and reinforcement learning; and LEAD, which proposes a sequence-structure co-design framework that optimizes both sequence and structure within their shared latent space.
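
To make the general pattern concrete, below is a minimal sketch, not taken from any of the papers above, of how a specialized single-step retrosynthesis model can be combined with a planner that searches for routes under a building-block constraint and attaches a human-readable rationale to each step (standing in for the natural language explanations an LLM would provide). The `SINGLE_STEP` table, `BUILDING_BLOCKS` set, scoring rule, and `plan_route` function are all illustrative assumptions, not the APIs of Retro-Expert, LARC, or any real library.

```python
# Illustrative sketch only: the reaction table, confidences, and constraint are
# toy stand-ins for a learned single-step retrosynthesis model and an LLM planner.
import heapq

# Toy single-step "model": target -> list of (precursors, confidence, rationale).
SINGLE_STEP = {
    "amide_product": [
        (("acid_A", "amine_B"), 0.9, "amide coupling of acid_A with amine_B"),
        (("ester_C", "amine_B"), 0.4, "aminolysis of ester_C"),
    ],
    "acid_A": [
        (("nitrile_D",), 0.7, "hydrolysis of nitrile_D"),
    ],
}

# Constraint: only these starting materials count as purchasable building blocks.
BUILDING_BLOCKS = {"amine_B", "nitrile_D", "ester_C"}


def plan_route(target, max_depth=5):
    """Best-first search for a route whose leaves are all allowed building blocks.

    Returns (score, steps), where steps is a list of human-readable rationales,
    or None if no route satisfies the constraint within max_depth steps.
    """
    # Each frontier entry: (negative route score, open molecules, explanation steps).
    frontier = [(-1.0, [target], [])]
    while frontier:
        neg_score, open_mols, steps = heapq.heappop(frontier)
        if not open_mols:
            return -neg_score, steps  # every leaf is an allowed building block
        mol, rest = open_mols[0], open_mols[1:]
        if mol in BUILDING_BLOCKS:
            heapq.heappush(frontier, (neg_score, rest, steps))
            continue
        if len(steps) >= max_depth:
            continue
        for precursors, conf, rationale in SINGLE_STEP.get(mol, []):
            new_steps = steps + [f"{mol} <- {', '.join(precursors)} ({rationale})"]
            heapq.heappush(
                frontier, (neg_score * conf, rest + list(precursors), new_steps)
            )
    return None


if __name__ == "__main__":
    result = plan_route("amide_product")
    if result:
        score, steps = result
        print(f"route score {score:.2f}")
        for step in steps:
            print(" ", step)
```

In the frameworks summarized above, the lookup table would be replaced by a trained single-step model, the constraint set by whatever restrictions the chemist imposes (available stock, cost, forbidden reagents), and the rationale strings by explanations generated and refined by the language model.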