Advances in Entity Recognition and Linking with Large Language Models

The field of natural language processing is seeing rapid progress in entity recognition and linking, driven by the growing capabilities of large language models (LLMs). Researchers are exploring approaches that improve the accuracy and efficiency of entity recognition, linking, and normalization. One key trend is using LLMs to augment traditional methods, enabling more effective and lightweight models. Another important direction is investigating how LLMs make predictions and identifying the biases that influence their performance.

Noteworthy papers in this area include:

- Knowing the Facts but Choosing the Shortcut, which investigates whether LLMs rely on genuine knowledge or on superficial heuristics in entity comparison tasks.
- PANER, which presents a paraphrase-augmented framework for low-resource named entity recognition, achieving state-of-the-art performance on few-shot and zero-shot tasks.
- ToMMeR, which introduces a lightweight model for efficient entity mention detection from LLM representations, achieving high recall and precision across multiple benchmarks.
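To make the paraphrase-augmentation idea concrete, here is a minimal toy sketch: labeled entity spans are re-embedded into paraphrased context templates while keeping BIO tags aligned. This is a generic illustration only, not PANER's actual pipeline; the hand-written templates stand in for an LLM paraphraser, and the function assumes a single tagged span per sentence.

```python
def augment(sentence_tokens, bio_tags, templates):
    """Create augmented NER examples by placing the labeled entity
    into paraphrased context templates, keeping BIO tags aligned.
    Assumes exactly one tagged entity span in the input sentence."""
    # Collect the entity tokens (anything not tagged "O") and its type.
    entity = [t for t, tag in zip(sentence_tokens, bio_tags) if tag != "O"]
    ent_type = next(tag.split("-")[1] for tag in bio_tags if tag != "O")
    augmented = []
    for tpl in templates:  # each template contains an {ENT} placeholder
        toks, tags = [], []
        for word in tpl.split():
            if word == "{ENT}":
                toks.extend(entity)
                tags.extend([f"B-{ent_type}"] + [f"I-{ent_type}"] * (len(entity) - 1))
            else:
                toks.append(word)
                tags.append("O")
        augmented.append((toks, tags))
    return augmented

examples = augment(
    ["Barack", "Obama", "visited", "Paris", "."],
    ["B-PER", "I-PER", "O", "O", "O"],
    ["{ENT} gave a speech yesterday", "Reporters interviewed {ENT} today"],
)
```

In a real low-resource setting, the template list would be replaced by LLM-generated paraphrases of the annotated sentence, multiplying the effective training data for the few labeled entities.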

Sources

Knowing the Facts but Choosing the Shortcut: Understanding How Large Language Models Compare Entities

PANER: A Paraphrase-Augmented Framework for Low-Resource Named Entity Recognition

Contextual Augmentation for Entity Linking using Large Language Models

From Memorization to Generalization: Fine-Tuning Large Language Models for Biomedical Term-to-Identifier Normalization

ToMMeR -- Efficient Entity Mention Detection from Large Language Models

Leveraging the Power of Large Language Models in Entity Linking via Adaptive Routing and Targeted Reasoning
