Scientific information retrieval and evaluation are developing rapidly, driven by the need to manage and assess a fast-growing volume of scientific literature. Researchers are pursuing more accurate and relevant search, alongside better ways to evaluate research impact and quality. Two trends stand out: integrating verification feedback into document ranking and retrieval, and developing more nuanced, contextualized methods for citation analysis and research evaluation. Techniques from artificial intelligence and machine learning, including large language models and sparse representations, are increasingly central to both tasks. Together, these advances promise more valid and reliable research assessments and better-informed decision-making in the scientific community.

Noteworthy papers in this area include PaperRegister, which proposes hierarchical indexing for flexible-grained paper search; +VeriRel, which integrates verification feedback into document ranking for scientific fact checking; CASPER, a sparse retrieval model for scientific search; and the statistical validation of the Innovation Lens, which predicts highly cited research papers.
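To make the idea of folding verification feedback into ranking concrete, here is a minimal sketch, not the +VeriRel method itself: each retrieved document carries a retrieval relevance score and a separate verification score, and the final ranking sorts by a weighted combination of the two. The function name, tuple layout, and weighting scheme are all illustrative assumptions.

```python
# Illustrative sketch (NOT the +VeriRel implementation): rerank retrieved
# documents by mixing a retrieval relevance score with a verification score.
# Names and the linear weighting are assumptions made for this example.

def rerank(docs, alpha=0.7):
    """Sort documents by a weighted mix of relevance and verification.

    docs:  list of (doc_id, relevance, verification) tuples, scores in [0, 1].
    alpha: weight on retrieval relevance; (1 - alpha) goes to verification.
    """
    combined = [
        (doc_id, alpha * rel + (1 - alpha) * ver)
        for doc_id, rel, ver in docs
    ]
    # Highest combined score first.
    return sorted(combined, key=lambda pair: pair[1], reverse=True)

candidates = [
    ("paper_a", 0.92, 0.30),  # highly relevant but weakly verified
    ("paper_b", 0.75, 0.95),  # relevant and strongly verified
    ("paper_c", 0.40, 0.60),
]
ranking = rerank(candidates)
```

With these weights, a strongly verified document can overtake a slightly more relevant but weakly verified one, which is the behavior verification-aware ranking is meant to achieve; in practice the combination function is typically learned rather than a fixed linear mix.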