The field of AI-driven research and development is evolving rapidly, with a focus on improving the efficiency, reproducibility, and transparency of scientific experiments and software development. Recent studies have highlighted the importance of standardized evaluation protocols, robust benchmarking, and systematic integration of domain knowledge into requirements engineering. New frameworks and tools, such as data-centric infrastructures and semantic-aware digital twins, are being developed to support these efforts, while researchers are also applying agile management methodologies to machine learning-enabled systems to address the challenges of these rapidly changing environments. Overall, the field is placing greater emphasis on rigor, reproducibility, and collaboration, with AI-driven tools and approaches playing an increasingly central role in accelerating scientific progress.

Noteworthy papers include OAgents, which introduces a foundation agent framework reported to achieve state-of-the-art performance, and Doc2Agent, a scalable pipeline for building tool-using agents from API documentation. In addition, the AutoExperiment benchmark was proposed to evaluate AI agents' ability to implement and run machine learning experiments from natural language descriptions, and the Define-ML framework offers a systematic approach to ideating machine learning-enabled systems.
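
To make the Doc2Agent idea concrete, the minimal Python sketch below shows the general pattern of distilling one parsed API-doc entry into a callable tool that an agent could register. The ToolSpec structure, the make_tool helper, and the example endpoint are all hypothetical illustrations of the pattern, not Doc2Agent's actual interface.

    # Hypothetical sketch: turning one API-doc entry into a callable tool
    # for an LLM agent. All names here (ToolSpec, make_tool, the example
    # endpoint) are illustrative assumptions, not Doc2Agent's real API.
    from dataclasses import dataclass, field
    from typing import Callable
    import urllib.parse
    import urllib.request


    @dataclass
    class ToolSpec:
        """Minimal tool description distilled from one API-doc entry."""
        name: str
        description: str
        base_url: str
        params: dict[str, str] = field(default_factory=dict)  # param -> description


    def make_tool(spec: ToolSpec) -> Callable[..., str]:
        """Wrap a ToolSpec in a plain function the agent can register and call."""
        def tool(**kwargs: str) -> str:
            unknown = set(kwargs) - set(spec.params)
            if unknown:
                raise ValueError(f"{spec.name}: unknown parameters {sorted(unknown)}")
            # This sketch assumes simple GET endpoints with query-string arguments.
            url = spec.base_url + "?" + urllib.parse.urlencode(kwargs)
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()

        tool.__name__ = spec.name
        tool.__doc__ = spec.description
        return tool


    # A spec as it might look after parsing one section of API documentation.
    search_papers = make_tool(ToolSpec(
        name="search_papers",
        description="Search a (hypothetical) paper index by keyword.",
        base_url="https://api.example.com/search",
        params={"q": "query string", "limit": "maximum number of results"},
    ))
    # An agent would expose search_papers in its tool list and invoke it with
    # arguments produced by the model, e.g. search_papers(q="agents", limit="5").

In a full pipeline, the parsing step itself would presumably be automated as well, for example by having a model extract endpoint names, parameters, and descriptions from the raw documentation before tools are generated.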