Research on scholarly workflows is shifting toward large language models (LLMs) as tools for improving both efficiency and integrity. A primary concern is the threat of AI-generated survey papers flooding preprint platforms, overwhelming researchers, and eroding trust in the scientific record. Proposed mitigations include strong norms for AI-assisted review writing, restored expert oversight, and new infrastructure such as dynamic live surveys.

LLMs are also being explored for automating methodological assessments: they show promise at identifying explicit methodological features, but still require human oversight for nuanced interpretations. In systematic literature reviews, approaches such as semi-automatic corpus filtration show significant reductions in manual screening effort alongside lower error rates; a minimal sketch of the consensus idea follows the list below.

Noteworthy papers in this area include:

- Leveraging LLMs for Semi-Automatic Corpus Filtration in Systematic Literature Reviews, which proposes a pipeline that classifies papers against descriptive prompts and decides inclusion jointly via a consensus scheme.
- LLM-REVal, which highlights the risks and equity concerns posed to human authors and academic research if LLMs are deployed in the peer review cycle without adequate caution.
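To make the consensus scheme concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the screening prompts, the `classify` callable, the majority threshold, and the keyword stub are all hypothetical stand-ins, not the paper's actual pipeline. In a real system, each vote would come from an LLM answering a descriptive prompt about a candidate paper.

```python
from collections import Counter
from typing import Callable

# Hypothetical descriptive prompts, one per inclusion criterion; the
# summary does not specify the prompts actually used in the paper.
PROMPTS = [
    "Does the paper report an empirical evaluation?",
    "Does the paper address systematic literature review tooling?",
    "Is the paper a full study rather than a position statement?",
]

def consensus_filter(
    abstracts: list[str],
    classify: Callable[[str, str], bool],
    threshold: float = 0.5,
) -> list[str]:
    """Keep a paper only if the fraction of prompts voting 'include'
    reaches the consensus threshold (simple majority by default)."""
    kept = []
    for abstract in abstracts:
        votes = Counter(classify(p, abstract) for p in PROMPTS)
        if votes[True] / len(PROMPTS) >= threshold:
            kept.append(abstract)
    return kept

# Stand-in classifier for demonstration only: a real pipeline would
# send the prompt and abstract to an LLM and parse a yes/no answer.
def keyword_stub(prompt: str, abstract: str) -> bool:
    return any(w in abstract.lower() for w in ("empirical", "evaluation", "study"))

if __name__ == "__main__":
    corpus = [
        "We present an empirical study of review screening with LLMs...",
        "A position statement on the future of preprint platforms...",
    ]
    for paper in consensus_filter(corpus, keyword_stub):
        print("KEEP:", paper)
```

Requiring agreement across several independently prompted classifications, rather than trusting a single yes/no answer, is what lets this style of pipeline trade a small amount of extra LLM querying for the lower error rates the digest mentions; borderline papers that split the vote can be routed to a human reviewer.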