The field of code generation and understanding is advancing rapidly with the help of Large Language Models (LLMs). Recent studies focus on improving LLM performance across code-related tasks such as code completion, code summarization, and reasoning about code execution. A key challenge is the models' lack of robust semantic understanding, which can lead to incorrect or inefficient generated code. To address this, researchers have proposed approaches including formal semantics, retrieval-augmented generation, and equivalence scores for evaluating the quality of generated code.

Another active line of work investigates code smells in LLM-generated code, which helps identify where code quality falls short. Studies have also shown that LLMs can generate high-level test cases from requirements and predict the relative comprehensibility of code snippets.

Noteworthy papers include GramTrans, which proposes a novel code representation that improves LLM performance on code generation tasks, and VeriEquivBench, which introduces a benchmark for evaluating the quality of formally verifiable code generated by LLMs. Overall, the field continues to evolve quickly, with new techniques aimed at improving both the performance of LLMs and the quality of the code they produce.
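To make the idea of an equivalence score concrete, the sketch below shows a minimal execution-based check: a candidate (e.g., LLM-generated) function is scored by the fraction of shared test inputs on which it agrees with a trusted reference implementation. This is only an illustrative assumption-laden example, not the metric used by VeriEquivBench or any specific paper; the function and variable names (`equivalence_score`, `generated_abs_diff`, etc.) are hypothetical, and it assumes both implementations are self-contained Python functions with the same signature.

```python
from typing import Any, Callable, Iterable

def equivalence_score(
    candidate: Callable[..., Any],
    reference: Callable[..., Any],
    test_inputs: Iterable[tuple],
) -> float:
    """Fraction of test inputs on which candidate and reference agree."""
    inputs = list(test_inputs)
    if not inputs:
        return 0.0
    agreements = 0
    for args in inputs:
        try:
            if candidate(*args) == reference(*args):
                agreements += 1
        except Exception:
            # A crash on either side counts as disagreement.
            pass
    return agreements / len(inputs)

# Trusted reference implementation.
def reference_abs_diff(a: int, b: int) -> int:
    return abs(a - b)

# Hypothetical LLM-generated candidate.
def generated_abs_diff(a: int, b: int) -> int:
    return a - b if a > b else b - a

score = equivalence_score(
    generated_abs_diff,
    reference_abs_diff,
    [(3, 7), (7, 3), (0, 0), (-2, 5)],
)
print(f"equivalence score: {score:.2f}")  # 1.00 for this candidate
```

Execution-based scoring like this only checks behavior on the sampled inputs; benchmarks targeting formally verifiable code instead require proofs of equivalence or correctness that hold for all inputs.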