The field of code comprehension and generation is moving toward more efficient and effective methods for understanding and generating code. One recent line of research incorporates additional context into neural code representations, such as version history and structural relationships, yielding significant gains on key comprehension tasks like code clone detection and summarization. Another line develops lighter-weight methods for code completion and review, for example replacing semantic search with keyword search and leveraging repository-level pretraining.

Noteworthy papers include:

- Grounded AI for Code Review: presents a production system for grounded code review that achieves sub-minute median time to first feedback while maintaining competitive violation reduction.
- RepoSummary: proposes a feature-oriented code repository summarization approach that automatically generates repository documentation and establishes accurate traceability links.
- Enhancing Neural Code Representation with Additional Context: shows that enriching code representations with contextual signals improves neural model performance on key comprehension tasks.
- SpareCodeSearch: demonstrates that keyword search is sufficient to retrieve relevant code context without extensive GPU resources.
- On Pretraining for Project-Level Code Completion: investigates how different repository-processing strategies affect in-context learning, achieving comparable performance on the Long Code Arena benchmark with a smaller dataset.
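To make the keyword-search idea concrete, here is a minimal sketch of retrieving code context by lexical overlap rather than embedding similarity. It is not the method from any of the papers above; the tokenizer, the TF-IDF weighting, and the `KeywordCodeSearch` class are illustrative assumptions, showing only that a CPU-only keyword index can rank candidate snippets for a query.

```python
import math
import re
from collections import Counter, defaultdict


def tokenize(text):
    """Split code into lowercase keyword tokens.

    Splits on non-alphanumeric characters, then breaks camelCase
    identifiers (e.g. HttpClient -> http, client).
    """
    parts = re.split(r"[^A-Za-z0-9]+", text)
    tokens = []
    for part in parts:
        tokens.extend(re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", part))
    return [t.lower() for t in tokens if t]


class KeywordCodeSearch:
    """Rank code snippets by TF-IDF overlap with query keywords.

    A deliberately simple stand-in for semantic (embedding-based)
    retrieval: no GPU, no model, just an inverted term-weight table.
    """

    def __init__(self, snippets):
        self.snippets = snippets
        self.doc_tokens = [Counter(tokenize(s)) for s in snippets]
        doc_freq = defaultdict(int)
        for tokens in self.doc_tokens:
            for term in tokens:
                doc_freq[term] += 1
        n = len(snippets)
        # Smoothed inverse document frequency per term.
        self.idf = {t: math.log(n / df) + 1.0 for t, df in doc_freq.items()}

    def search(self, query, k=3):
        """Return up to k snippets with nonzero keyword-match score."""
        query_terms = tokenize(query)
        scored = []
        for i, tokens in enumerate(self.doc_tokens):
            score = sum(tokens[t] * self.idf.get(t, 0.0) for t in query_terms)
            scored.append((score, i))
        scored.sort(reverse=True)
        return [self.snippets[i] for score, i in scored[:k] if score > 0]


snippets = [
    "def parse_config(path): ...",
    "def open_file(path): return open(path)",
    "class HttpClient:\n    def get(self, url): ...",
]
engine = KeywordCodeSearch(snippets)
results = engine.search("http get url")  # HttpClient snippet ranks first
```

The design choice mirrors the paper's claim: for code, identifier names already carry much of the signal a dense retriever would learn, so lexical matching over split identifiers is often a strong, cheap baseline.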