The field of software development is seeing significant advances driven by the integration of Large Language Models (LLMs). Recent research has focused on improving the security and efficiency of LLM-based systems, particularly in code generation, issue localization, and performance optimization. Notable contributions include RepoLens, which addresses concern mixing and scattering in large-scale repositories; SecureAgentBench, which evaluates secure code generation; and SemGuard, which corrects semantic errors in LLM-generated code. The introduction of benchmarks such as PerfBench, BuildBench, and MULocBench has further enabled evaluation of LLM agents' ability to optimize performance, compile real-world open-source software, and localize code and non-code issues. In addition, work on explainable fault localization, environment setup, and security assessment of AI code agents underscores the need for more robust and reliable systems. Overall, the field is moving toward more secure, efficient, and transparent software development practices supported by LLMs. Noteworthy papers include RepoLens, which improves issue localization by abstracting and leveraging conceptual knowledge from code repositories, and SemGuard, which performs real-time semantic supervision to correct semantic errors in LLM-generated code.