The field of information retrieval is increasingly leveraging large language models (LLMs) to improve ranking and retrieval performance. Recent work addresses the tradeoff between computational efficiency and ranking accuracy, with approaches such as comparative ranking and query reasoning gaining attention. Personalization is also growing in importance, with methods that incorporate user interactions and consultation values under active exploration. Together, these advances promise to improve the effectiveness of retrieval systems and to enable more complex tasks such as object search and personalized search.

Noteworthy papers include:

- Leveraging Reference Documents for Zero-Shot Ranking via Large Language Models: proposes a simple, effective comparative ranking method that reduces computational cost while preserving the advantages of comparative evaluation.
- TongSearch-QR: Reinforced Query Reasoning for Retrieval: introduces a family of small-scale language models for query reasoning and rewriting whose performance rivals large-scale models without their prohibitive inference costs.
- InsertRank: LLMs can reason over BM25 scores to Improve Listwise Reranking: demonstrates that exposing lexical signals such as BM25 scores during reranking improves retrieval effectiveness.
- Enhancing Object Search in Indoor Spaces via Personalized Object-factored Ontologies: proposes a framework that enables robots to deduce personalized ontologies of indoor environments, improving performance in multi-object search tasks.
- Similarity = Value? Consultation Value Assessment and Alignment for Personalized Search: proposes a consultation value assessment framework that evaluates historical consultations from novel perspectives, along with a value-aware personalized search model that selectively incorporates high-value consultations.
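To make the BM25-in-the-prompt idea behind InsertRank concrete, here is a minimal sketch: an Okapi BM25 scorer plus a listwise reranking prompt that interleaves each passage with its lexical score. The prompt layout and function names are illustrative assumptions of mine, not the paper's actual implementation.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Okapi BM25 scores for each doc (a list of token lists) against query_terms."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

def build_listwise_prompt(query, docs, scores):
    """Hypothetical listwise reranking prompt that exposes BM25 scores to the LLM."""
    lines = [f"Rank the passages below by relevance to the query: {query!r}"]
    for i, (d, s) in enumerate(zip(docs, scores), 1):
        lines.append(f"[{i}] (BM25: {s:.2f}) {' '.join(d)}")
    lines.append("Output the passage numbers in decreasing order of relevance.")
    return "\n".join(lines)
```

For example, with `docs = [["neural", "ranking"], ["bm25", "retrieval", "baseline"]]` and query terms `["bm25", "retrieval"]`, the second passage receives the highest BM25 score, and the resulting prompt annotates each numbered passage with that score before the LLM reorders the list.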