The field of software development is shifting toward intelligent coding systems that not only generate code but also justify their decisions, driven by the need for greater transparency and trust in AI-assisted development. Recent research has focused on neuro-symbolic approaches to justification generation, which aim to produce clear and consistent explanations for the code these systems emit. A second line of work applies formal methods to ensure the correctness and robustness of software, using large language models (LLMs) for automated formalization, verification, and debugging of code. Notably, LLMs are being used to translate code between languages, such as from C to Rust, and to verify the correctness of distributed deep learning models. Researchers are also exploring constrained decoding methods that restrict LLM output to code conforming to specific formal languages and syntactic constraints. Together, these advances are paving the way for more reliable, efficient, and trustworthy software development.

Noteworthy papers include: Automated Formalization via Conceptual Retrieval-Augmented LLMs, which combines concept-level retrieval augmentation with LLMs to automate formalization; FormalGrad, which introduces a principled framework integrating formal methods with LLM-based code generation; and Constrained Decoding of Diffusion LLMs with Context-Free Grammars, which enforces grammar constraints during decoding of diffusion LLMs.
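
To make the constrained-decoding idea concrete, the Python sketch below is a minimal, hypothetical illustration, not the method from the diffusion-LLM paper: a random scoring function stands in for a real model, the "grammar" is a toy digit/operator expression language rather than a full context-free grammar, and decoding is greedy and autoregressive. The core idea it demonstrates is the same, though: at each step, candidate tokens that would make the partial output unparseable are masked out before the best-scoring token is selected.

```python
import random

# Toy vocabulary: single-digit numbers, two operators, and an end-of-sequence marker.
VOCAB = list("0123456789+*") + ["<eos>"]

def is_valid_prefix(tokens):
    """Check that tokens form a valid prefix of the toy grammar
    expr := DIGIT (('+' | '*') DIGIT)*  (i.e., digits and operators alternate)."""
    for i, tok in enumerate(tokens):
        if i % 2 == 0 and tok not in "0123456789":
            return False
        if i % 2 == 1 and tok not in "+*":
            return False
    return True

def is_complete(tokens):
    """A complete expression has odd length and ends on a digit."""
    return len(tokens) % 2 == 1 and tokens[-1] in "0123456789"

def fake_lm_scores(tokens):
    """Stand-in for real model logits: random scores over the vocabulary."""
    return {tok: random.random() for tok in VOCAB}

def constrained_greedy_decode(max_len=7):
    tokens = []
    while len(tokens) < max_len:
        scores = fake_lm_scores(tokens)
        # Grammar mask: keep only continuations that remain parseable,
        # and only allow <eos> once the expression is complete.
        allowed = {}
        for tok, score in scores.items():
            if tok == "<eos>":
                if is_complete(tokens):
                    allowed[tok] = score
            elif is_valid_prefix(tokens + [tok]):
                allowed[tok] = score
        best = max(allowed, key=allowed.get)
        if best == "<eos>":
            break
        tokens.append(best)
    return "".join(tokens)

if __name__ == "__main__":
    random.seed(0)
    print(constrained_greedy_decode())  # prints a short grammar-valid expression
```

A production system would replace the toy prefix check with an incremental parser over the target grammar and apply the same mask to the model's actual token logits, but the structure of the decoding loop stays the same.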