The field of automated scientific writing and review is moving toward more sophisticated and nuanced approaches. Researchers are developing new methods to improve the quality and accuracy of automated essay scoring, related-work generation, and peer review, aiming to address limitations of existing systems such as the inability to capture topic-specific features and the lack of transparency in evaluation criteria. Multi-agent AI workflows, reinforcement learning, and large language models are becoming increasingly prominent in this field: they enable the automation of complex tasks such as materials characterization and novelty assessment, and have the potential to improve both the efficiency and the rigor of scientific research.

Noteworthy papers in this area include:

- Operationalizing Serendipity: Multi-Agent AI Workflows for Enhanced Materials Characterization with Theory-in-the-Loop, which introduces a framework for operationalizing serendipity in materials research.
- ReviewRL: Towards Automated Scientific Review with RL, which presents a reinforcement learning framework for generating comprehensive and factually grounded scientific paper reviews.
- Beyond Not Novel Enough: Enriching Scholarly Critique with LLM-Assisted Feedback, which proposes a structured approach to automated novelty evaluation that models expert reviewer behavior.
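To make the idea of a multi-agent review workflow concrete, the sketch below chains a review-drafting agent with a novelty-critique agent. It is a minimal, purely illustrative example, not taken from any of the cited papers: the names call_llm, draft_review, and critique_novelty are hypothetical, and the placeholder model call would be replaced by a real LLM backend in practice.

```python
# Illustrative sketch of a two-agent review workflow (hypothetical, not the
# method of any cited paper): one agent drafts a review, a second agent
# critiques how well the draft grounds its novelty claims in the paper.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    # Placeholder: a real workflow would send `prompt` to an LLM API here.
    return f"[model output for a prompt of {len(prompt)} characters]"


@dataclass
class Review:
    summary: str
    novelty_critique: str


def draft_review(paper_text: str) -> str:
    # Agent 1: draft a full review of the paper.
    prompt = (
        "Summarize the contributions of the following paper and list its "
        "strengths and weaknesses:\n\n" + paper_text
    )
    return call_llm(prompt)


def critique_novelty(paper_text: str, draft: str) -> str:
    # Agent 2: check that the draft's novelty claims are specific and grounded,
    # flagging vague "not novel enough" statements.
    prompt = (
        "Given this draft review:\n" + draft +
        "\n\nAssess whether its novelty claims are specific and grounded in "
        "the paper below, and flag any vague statements:\n\n" + paper_text
    )
    return call_llm(prompt)


def review_pipeline(paper_text: str) -> Review:
    draft = draft_review(paper_text)
    novelty = critique_novelty(paper_text, draft)
    return Review(summary=draft, novelty_critique=novelty)


if __name__ == "__main__":
    print(review_pipeline("(paper text would go here)"))
```

In a real system, each agent would typically carry its own prompt template and model configuration, and the novelty critique could feed back into a revision step rather than being returned directly.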