The field of cross-lingual information retrieval (CLIR) is moving towards more effective and efficient methods for retrieving relevant documents across languages. Recent research targets the central challenge of aligning representations of different languages in a shared vector space (see the bi-encoder sketch below). There is growing interest in using large language models (LLMs) and multilingual bi-encoders to improve retrieval performance, alongside work on making better use of training data and on lightweight pipelines for entity linking. Overall, the field is advancing towards more robust and accurate cross-lingual retrieval systems.

Noteworthy papers include:

- ViRanker: achieves strong early-rank accuracy for Vietnamese retrieval.
- Boosting Data Utilization for Multilingual Dense Retrieval: proposes a method for obtaining high-quality hard negative samples and effective mini-batch composition, outperforming strong existing baselines (a generic mining recipe is sketched below).
- BIBERT-Pipe: a lightweight pipeline for biomedical nested named entity linking, showing that minimal yet principled modifications can be effective.
- Evaluating Large Language Models for Cross-Lingual Retrieval: finds that multilingual bi-encoders and pairwise rerankers built on instruction-tuned LLMs deliver further gains in cross-lingual retrieval (see the pairwise-reranking sketch after the list).
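To make the shared-vector-space idea concrete, here is a minimal sketch of cross-lingual retrieval with an off-the-shelf multilingual bi-encoder. The model name, query, and documents are illustrative assumptions, not taken from the papers above; any multilingual sentence-embedding model would play the same role.

```python
# Minimal sketch: cross-lingual retrieval with a multilingual bi-encoder.
# Model choice and example texts are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

# A multilingual model embeds queries and documents from different
# languages into one shared vector space, so similarity is comparable.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "What causes monsoon rainfall?"  # English query
documents = [
    "La mousson est causée par des différences de température.",  # French, relevant
    "Gió mùa hình thành do chênh lệch nhiệt độ giữa đất và biển.",  # Vietnamese, relevant
    "Der Aktienmarkt schloss heute höher.",                        # German, off-topic
]

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(documents, convert_to_tensor=True)

# Rank documents by cosine similarity in the shared space.
scores = util.cos_sim(query_emb, doc_embs)[0].tolist()
ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
for rank, i in enumerate(ranked, start=1):
    print(f"{rank}. score={scores[i]:.3f}  {documents[i][:60]}")
```

Because both sides are encoded independently, document embeddings can be precomputed and indexed, which is what makes bi-encoders the efficient first stage of a CLIR pipeline.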
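The hard-negative idea behind work on data utilization can be illustrated with a generic mining recipe: score the corpus with the current encoder and keep highly ranked passages that are not labeled relevant. This is a common baseline pattern, not the specific method of the paper above; `mine_hard_negatives` and its parameters are hypothetical names for illustration.

```python
# Sketch of generic hard-negative mining for dense retrieval training.
# Function and parameter names are illustrative, not from the paper.
import numpy as np

def mine_hard_negatives(query_emb, corpus_embs, positive_ids, k=100, n_neg=8):
    scores = corpus_embs @ query_emb           # dot-product relevance scores
    top_k = np.argsort(-scores)[:k]            # highest-scoring passage ids
    # Highly ranked but non-relevant passages are "hard": the model
    # confuses them with true positives, so they make informative negatives.
    hard = [int(i) for i in top_k if int(i) not in positive_ids]
    return hard[:n_neg]

# Demo with random embeddings standing in for a real encoder's output.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64)).astype(np.float32)
q = rng.normal(size=64).astype(np.float32)
print(mine_hard_negatives(q, corpus, positive_ids={3, 17}))
```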
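Finally, pairwise reranking with an instruction-tuned LLM can be sketched as head-to-head comparisons over a candidate list. The prompt template and the `ask_llm` callback are assumptions for illustration; wire the callback to whatever chat-completion API you use.

```python
# Sketch of pairwise LLM reranking; prompt format and `ask_llm` are
# illustrative assumptions, not the evaluated papers' exact setup.
from itertools import combinations

def pairwise_rerank(query, docs, ask_llm):
    """Rank docs by head-to-head wins, judged by the LLM call `ask_llm`."""
    wins = [0] * len(docs)
    for i, j in combinations(range(len(docs)), 2):
        prompt = (
            f"Query: {query}\n"
            f"Passage A: {docs[i]}\n"
            f"Passage B: {docs[j]}\n"
            "Which passage better answers the query? Answer 'A' or 'B'."
        )
        answer = ask_llm(prompt).strip().upper()
        wins[i if answer.startswith("A") else j] += 1
    # Highest win count first.
    return sorted(range(len(docs)), key=lambda k: wins[k], reverse=True)

# Demo with a dummy judge; replace with a real chat-completion call.
dummy_judge = lambda prompt: "A"
print(pairwise_rerank("what is CLIR?", ["doc one", "doc two", "doc three"], dummy_judge))
```

Note the O(n²) cost in LLM calls, which is why pairwise reranking is typically applied only to a small candidate set retrieved by a bi-encoder first stage.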