The field of scientific discovery is advancing rapidly through the integration of artificial intelligence (AI) and machine learning (ML) techniques. Recent work has focused on improving the accuracy and efficiency of AI models across scientific applications, including materials discovery, biomedical natural language processing, and scientific literature understanding. A key trend is the use of large language models (LLMs) to integrate domain knowledge and automate scientific tasks such as symbolic regression and software development, with reported gains in the robustness and reliability of the resulting discoveries. Notably, incorporating domain-specific knowledge and constraints has been shown to improve the performance of LLMs in scientific applications. Overall, the field is moving toward more sophisticated AI systems that can synthesize knowledge efficiently and accelerate scientific progress.

Noteworthy papers include:

- Aligning Reasoning LLMs for Materials Discovery with Physics-aware Rejection Sampling, which introduces a novel training scheme that improves the accuracy and calibration of LLMs in materials discovery (the general filtering idea is illustrated in the first sketch below).
- Knowledge Integration for Physics-informed Symbolic Regression Using Pre-trained Large Language Models, which leverages pre-trained LLMs to automate the incorporation of domain knowledge into symbolic regression (see the second sketch below).
- SciGPT: A Large Language Model for Scientific Literature Understanding and Knowledge Discovery, which presents a domain-adapted foundation model for scientific literature understanding and knowledge discovery.
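
To make the rejection-sampling idea concrete, here is a minimal sketch of a physics-aware filter: a stochastic proposal (standing in for LLM generation) produces candidate material-property predictions, and only candidates that pass a set of physics checks are kept. The property names, bounds, and checks are illustrative assumptions, not the paper's actual pipeline or constraints.

```python
import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    """One model-generated prediction for a hypothetical material."""
    formation_energy_ev: float  # eV/atom
    band_gap_ev: float          # eV


def propose(rng: random.Random) -> Candidate:
    """Stand-in for sampling a structured prediction from an LLM."""
    return Candidate(
        formation_energy_ev=rng.gauss(-1.0, 1.5),
        band_gap_ev=rng.gauss(1.0, 1.5),
    )


# Hypothetical physics checks; a real system would use domain-specific
# validators (charge balance, thermodynamic stability, etc.).
CHECKS: List[Callable[[Candidate], bool]] = [
    lambda c: c.band_gap_ev >= 0.0,                  # band gaps cannot be negative
    lambda c: -5.0 <= c.formation_energy_ev <= 5.0,  # reject implausible energies
]


def physics_aware_rejection_sample(n_keep: int, seed: int = 0) -> List[Candidate]:
    """Draw proposals and keep only those passing every physics check."""
    rng = random.Random(seed)
    kept: List[Candidate] = []
    while len(kept) < n_keep:
        cand = propose(rng)
        if all(check(cand) for check in CHECKS):
            kept.append(cand)
    return kept


if __name__ == "__main__":
    accepted = physics_aware_rejection_sample(n_keep=100)
    mean_gap = sum(c.band_gap_ev for c in accepted) / len(accepted)
    print(f"accepted {len(accepted)} candidates, mean band gap = {mean_gap:.2f} eV")
```

In a rejection-sampling alignment setup, the accepted samples would typically serve as fine-tuning targets rather than being averaged; the aggregation here is only for illustration of the filter-then-keep structure.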
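
The second sketch illustrates, under simplifying assumptions, how a domain-knowledge constraint can prune a symbolic regression search: a fixed pool of candidate expressions is filtered by a prior (here, that the quantity vanishes at x = 0, a placeholder for a constraint an LLM might extract from the literature) before the survivors are ranked by fit. The candidate pool, data, and constraint are all hypothetical and much simpler than a real symbolic regression system.

```python
import math
import random
from typing import Callable, List, Tuple

# Toy data from a hidden "physical law" y = 2 * x**2, with noise.
rng = random.Random(1)
data: List[Tuple[float, float]] = [
    (x, 2.0 * x**2 + rng.gauss(0.0, 0.05)) for x in [i / 10 for i in range(1, 21)]
]

# Small fixed pool of candidate expressions; a real symbolic regressor
# would search this space rather than enumerate it.
candidates: List[Tuple[str, Callable[[float], float]]] = [
    ("2*x**2",     lambda x: 2.0 * x**2),
    ("x**2 + 1",   lambda x: x**2 + 1.0),
    ("3*x",        lambda x: 3.0 * x),
    ("exp(x) - 1", lambda x: math.exp(x) - 1.0),
]


def satisfies_prior(f: Callable[[float], float]) -> bool:
    """Hypothetical domain constraint: the quantity must vanish at x = 0."""
    return abs(f(0.0)) < 1e-9


def mse(f: Callable[[float], float]) -> float:
    """Mean squared error of an expression against the toy data."""
    return sum((f(x) - y) ** 2 for x, y in data) / len(data)


# Filter by the domain prior first, then rank the survivors by fit quality.
admissible = [(name, f) for name, f in candidates if satisfies_prior(f)]
best_name, best_f = min(admissible, key=lambda nf: mse(nf[1]))
print(f"best admissible expression: {best_name}  (MSE = {mse(best_f):.4f})")
```

The constraint filter removes "x**2 + 1" before fitting, so the search only compares expressions consistent with the stated prior; this stands in for the broader idea of letting domain knowledge restrict the hypothesis space rather than relying on data fit alone.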