Advances in Clinical Natural Language Processing

Clinical natural language processing is advancing rapidly, with much of the work focused on extracting structured information from unstructured clinical text. Recent research highlights the ability of large language models (LLMs) to reach state-of-the-art performance on core clinical NLP tasks such as named entity recognition, question answering, and text classification. LLMs have also proven effective at checking adherence to clinical reporting guidelines, extracting medical insights from electronic health records, and extending medical ontologies from clinical notes. Challenges remain, however, including the limitations of traditional evaluation metrics, the need to protect patient privacy, and the goal of building more equitable and culturally aware medical technologies.

Noteworthy papers include Evaluating Open-Weight Large Language Models for Structured Data Extraction from Narrative Medical Reports, which demonstrates that open-weight LLMs can extract structured data from clinical reports across multiple languages and institutions, and MedPath, which introduces a large-scale biomedical entity linking dataset with cross-vocabulary hierarchical paths that supports more accurate and interpretable clinical NLP models.
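To make the structured-extraction task concrete, the sketch below shows one common pattern: prompting an LLM to return a fixed JSON schema from a narrative report, then validating the output before it enters a downstream pipeline. This is a minimal illustration, not the setup used in the papers above; the generate callable, the prompt, and the field names are assumptions chosen for the example.

```python
import json

# Illustrative target schema for a narrative clinical report.
# Field names are hypothetical, not taken from any of the cited papers.
SCHEMA_FIELDS = {
    "diagnosis": str,
    "tumor_size_mm": (int, float, type(None)),
    "follow_up_recommended": bool,
}

PROMPT_TEMPLATE = """Extract the following fields from the clinical report and
return ONLY a JSON object with keys: diagnosis (string),
tumor_size_mm (number or null), follow_up_recommended (true/false).

Report:
{report}
"""


def extract_structured(report: str, generate) -> dict:
    """Ask an LLM (any callable mapping prompt -> text) for a JSON record, then validate it.

    `generate` is assumed to wrap an open-weight model, e.g. a locally hosted
    text-generation endpoint; swapping models only changes this callable.
    """
    raw = generate(PROMPT_TEMPLATE.format(report=report))

    # Models sometimes wrap JSON in prose; keep only the outermost braces.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("model did not return a JSON object")
    record = json.loads(raw[start : end + 1])

    # Reject outputs that drop fields or change types, so downstream tables
    # never receive silently malformed rows.
    for field, allowed_types in SCHEMA_FIELDS.items():
        if field not in record or not isinstance(record[field], allowed_types):
            raise ValueError(f"invalid or missing field: {field}")
    return record
```

A validation step like this also makes multi-model, multi-language comparisons of the kind reported in the open-weight extraction paper easier to run, since every candidate model is scored on parsed, schema-conforming output rather than on raw generations.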

Sources

Evaluating Open-Weight Large Language Models for Structured Data Extraction from Narrative Medical Reports Across Multiple Use Cases and Languages

MedPath: Multi-Domain Cross-Vocabulary Hierarchical Paths for Biomedical Entity Linking

Identifying Imaging Follow-Up in Radiology Reports: A Comparative Analysis of Traditional ML and LLM Approaches

MedPT: A Massive Medical Question Answering Dataset for Brazilian-Portuguese Speakers

LLM4SCREENLIT: Recommendations on Assessing the Performance of Large Language Models for Screening Literature in Systematic Reviews

Evaluating the Ability of Large Language Models to Identify Adherence to CONSORT Reporting Guidelines in Randomized Controlled Trials: A Methodological Evaluation Study

OEMA: Ontology-Enhanced Multi-Agent Collaboration Framework for Zero-Shot Clinical Named Entity Recognition

HEAD-QA v2: Expanding a Healthcare Benchmark for Reasoning

Balancing Natural Language Processing Accuracy and Normalisation in Extracting Medical Insights

Utilizing Large Language Models for Zero-Shot Medical Ontology Extension from Clinical Notes
