The field of natural language processing is seeing rapid progress in code understanding and generation driven by large language models (LLMs). Recent studies show that LLMs can be fine-tuned to improve their reasoning on code-related tasks such as mathematical problem solving and code completion. Pseudocode and flowcharts, used as intermediate representations, have proven effective for translating code between programming languages. Researchers have also identified syntactic blind spots in LLMs: systematic errors that arise when a problem's surface form is misaligned with the model's internal representation.

Noteworthy papers in this area include:

- On Code-Induced Reasoning in LLMs, which investigates how code structure and semantics affect LLM reasoning capabilities.
- Regression Language Models for Code, which proposes a unified model for predicting numeric outcomes of code execution across multiple programming languages.
- On Effective Semantic Translation for Code, which explores pseudocode-based translation to improve code translation accuracy.
- Syntactic Blind Spots, which identifies a systematic failure mode in LLMs caused by syntactic misalignment.
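The pseudocode-as-intermediate-representation idea can be sketched as a two-stage prompting pipeline: first lift the source program into language-neutral pseudocode, then lower that pseudocode into the target language. This is only an illustrative sketch, not the method from any of the papers above; the function name, the prompts, and the stubbed `fake_llm` callable are all hypothetical.

```python
from typing import Callable

def translate_via_pseudocode(source: str, src_lang: str, dst_lang: str,
                             llm: Callable[[str], str]) -> str:
    """Two-stage translation: source -> language-neutral pseudocode -> target.

    The intermediate pseudocode strips language-specific syntax, which is
    the intuition behind pseudocode-based semantic translation.
    """
    # Stage 1: rewrite the source program as plain-English pseudocode.
    pseudocode = llm(
        f"Rewrite the following {src_lang} code as language-neutral "
        f"pseudocode, one step per line:\n{source}"
    )
    # Stage 2: implement the pseudocode in the target language.
    return llm(
        f"Implement this pseudocode in {dst_lang}:\n{pseudocode}"
    )

# Stubbed "LLM" so the sketch runs without an API; a real pipeline
# would call an actual model at both stages.
def fake_llm(prompt: str) -> str:
    if prompt.startswith("Rewrite"):
        return "1. read n\n2. return n squared"
    return "function square(n) { return n * n; }"

print(translate_via_pseudocode("def square(n): return n * n",
                               "Python", "JavaScript", fake_llm))
```

Routing the translation through pseudocode decouples what the program does from how either language spells it, which is why this style of intermediate representation can help with cross-language transfer.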