The field of scientific research is undergoing a significant transformation through the integration of large language models (LLMs). These models are being used to automate tasks such as literature mining, predictive modeling, and experiment design, making the research process faster and more systematic. LLMs are also being used to generate new ideas and hypotheses and to facilitate collaboration between human researchers and AI systems. Notably, open-source LLMs are being developed that match the performance of closed-source commercial models while offering greater transparency, reproducibility, and cost-effectiveness. Overall, the adoption of LLMs is poised to revolutionize how scientific research is conducted, with major impacts expected in fields such as materials science, quantum physics, and biology.

Noteworthy papers in this area include AIonopedia, which introduces an LLM agent for ionic liquid discovery, and AI-Mandel, which presents an LLM agent that can generate and implement ideas in quantum physics. Additionally, Project Rachel explores the possibility of AI becoming a scholarly author, and Early science acceleration experiments with GPT-5 demonstrates the potential of AI models to accelerate scientific progress.