Software engineering research is seeing rapid progress in code generation and review, driven by the growing capabilities of large language models. Recent work has concentrated on benchmarks and evaluation frameworks that measure model performance across programming languages and tasks, alongside rising interest in automating code review, where techniques such as retrieval-augmented generation have shown encouraging results. Noteworthy papers include UA-Code-Bench, a benchmark for evaluating language models' code generation in Ukrainian; CodeMapper, a language-agnostic approach to mapping code regions across commits; SWE-Compass, a comprehensive benchmark for evaluating large language models on software engineering tasks; and UI2Code$^N$, a visual language model for test-time scalable interactive UI-to-code generation. Together, these efforts stand to improve both the efficiency and the quality of software development.
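To make the retrieval-augmented idea more concrete, the sketch below shows one minimal way such a pipeline could be structured for review automation: given a new diff hunk, it retrieves the most similar previously reviewed hunks by TF-IDF cosine similarity and folds them into a prompt for a language model (the model call itself is omitted). The corpus, helper names, and prompt format are illustrative assumptions, not drawn from any of the cited papers.

```python
# Minimal sketch of retrieval-augmented code review (illustrative only).
# Past (diff hunk, review comment) pairs are indexed with TF-IDF; for a new
# hunk, the most similar past examples are retrieved as prompt context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of past reviews.
PAST_REVIEWS = [
    ("for i in range(len(items)): total += items[i]",
     "Prefer sum(items) over an index loop."),
    ("except Exception: pass",
     "Avoid silently swallowing exceptions; log or re-raise."),
    ("password = 'hunter2'",
     "Do not hard-code credentials; read them from configuration."),
]

def retrieve_similar(new_hunk: str, k: int = 2):
    """Return the k past (hunk, comment) pairs most similar to new_hunk."""
    hunks = [hunk for hunk, _ in PAST_REVIEWS]
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(hunks + [new_hunk])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(scores, PAST_REVIEWS), key=lambda p: p[0], reverse=True)
    return [pair for _, pair in ranked[:k]]

def build_prompt(new_hunk: str) -> str:
    """Assemble a review prompt grounded in retrieved examples."""
    examples = retrieve_similar(new_hunk)
    context = "\n".join(f"Hunk: {h}\nReview: {c}" for h, c in examples)
    return f"{context}\n\nHunk: {new_hunk}\nReview:"

if __name__ == "__main__":
    print(build_prompt("try:\n    do_work()\nexcept Exception:\n    pass"))
```

In practice, systems of this kind typically replace the TF-IDF step with learned code embeddings and retrieve from a repository-scale index, but the structure (retrieve similar reviewed changes, then condition generation on them) is the same.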