The field of clinical natural language processing is moving toward large language models (LLMs) and entity-aware retrieval methods to improve performance on tasks such as named entity recognition, coreference resolution, and question answering. Researchers are probing the strengths and limitations of different approaches, including supervised fine-tuning, in-context learning, and prompting experiments, and are investigating domain-specific cues and entity dictionaries as ways to improve the accuracy and efficiency of LLMs on biomedical NLP tasks. Notably, entity-aware retrieval has shown promise for semantic question answering over electronic health records. Two papers stand out: BioCoref, which demonstrates the potential of lightweight prompt engineering for enhancing LLM utility in biomedical NLP tasks, and Beyond Long Context, which introduces the Clinical Entity Augmented Retrieval (CLEAR) method and improves semantic question answering while reducing token usage.
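To make the entity-aware retrieval idea concrete, the sketch below shows one minimal way such a pipeline could work: map question mentions to canonical entities via a small dictionary, then retrieve only the record chunks sharing those entities, rather than sending the full record to an LLM. This is an illustrative sketch under assumed names (`ENTITY_DICT`, `extract_entities`, `retrieve`), not the implementation from CLEAR or any of the papers above.

```python
# Hypothetical sketch of entity-aware retrieval for clinical QA.
# The dictionary, helper names, and scoring are illustrative only.

ENTITY_DICT = {  # toy clinical entity dictionary with synonyms
    "metformin": {"metformin", "glucophage"},
    "hba1c": {"hba1c", "hemoglobin a1c", "glycated hemoglobin"},
}

def extract_entities(text):
    """Map surface mentions in `text` to canonical entity ids."""
    lowered = text.lower()
    return {canon for canon, surfaces in ENTITY_DICT.items()
            if any(s in lowered for s in surfaces)}

def retrieve(question, chunks, top_k=2):
    """Rank record chunks by entity overlap with the question."""
    q_entities = extract_entities(question)
    scored = [(len(q_entities & extract_entities(c)), c) for c in chunks]
    scored = [(n, c) for n, c in scored if n > 0]  # drop entity-free chunks
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:top_k]]

record = [
    "Patient started on Glucophage 500 mg twice daily.",
    "Chest X-ray unremarkable.",
    "Hemoglobin A1c improved from 8.2% to 7.1% on metformin.",
]
hits = retrieve("Did HbA1c respond to metformin?", record)
# Only the two entity-bearing chunks are returned; the irrelevant
# imaging note is filtered out, which is how token usage shrinks.
```

A production system would replace the dictionary lookup with a clinical NER model and a normalization step (e.g., to UMLS concepts), but the retrieval logic of filtering and ranking by shared entities stays the same.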