The field of natural language processing is seeing rapid progress in entity recognition, linking, and normalization, driven by the growing capabilities of large language models (LLMs). A key trend is the use of LLMs to augment traditional methods, enabling lighter-weight models that match or exceed the accuracy of heavier pipelines. Another important direction is the investigation of how LLMs arrive at their predictions and the identification of biases that influence their performance. Noteworthy papers in this area include: Knowing the Facts but Choosing the Shortcut, which investigates whether LLMs rely on genuine knowledge or on superficial heuristics in entity comparison tasks; PANER, a paraphrase-augmented framework for low-resource named entity recognition that achieves state-of-the-art results on few-shot and zero-shot tasks; and ToMMeR, a lightweight model for efficient entity mention detection from LLM representations that achieves high recall and precision on multiple benchmarks.
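To make the "lightweight model on top of an LLM" idea concrete, below is a minimal sketch of a ToMMeR-style mention-detection probe: a single linear layer trained on frozen per-token representations. This is an illustrative assumption about the general approach, not the paper's exact method, and the hidden states here are random stand-ins; a real setup would extract them from an actual LLM.

```python
# Sketch: a lightweight mention-detection probe over frozen LLM hidden states.
# All dimensions, labels, and data below are synthetic placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

HIDDEN_DIM = 768   # typical LLM hidden size (assumption)
N_TOKENS = 2000    # number of synthetic training tokens

# Stand-ins for frozen per-token LLM hidden states and binary labels
# (1 = token is inside an entity mention, 0 = outside).
hidden_states = torch.randn(N_TOKENS, HIDDEN_DIM)
labels = (hidden_states[:, 0] > 0.5).float()  # toy labeling rule

# The probe itself: one linear layer, orders of magnitude smaller
# than the LLM whose representations it reads.
probe = nn.Linear(HIDDEN_DIM, 1)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    optimizer.zero_grad()
    logits = probe(hidden_states).squeeze(-1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    preds = (torch.sigmoid(probe(hidden_states).squeeze(-1)) > 0.5).float()
    print(f"train accuracy: {(preds == labels).float().mean():.3f}")
```

Because the LLM stays frozen and only the probe is trained, the entire trainable component is a few thousand parameters, which is what makes this family of approaches attractive for efficient mention detection.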