Advances in Natural Language Processing and Speech Recognition

The field of natural language processing and speech recognition is seeing rapid progress, with a focus on improving the accuracy and efficiency of tasks such as semantic role labeling, information extraction, and automatic speech recognition. Large language models are increasingly central to this work, with techniques such as fine-tuning and retrieval-augmented generation being explored to improve their performance. There is also growing attention to pitfalls in auditing practices for automatic speech recognition, and to developing more robust, standardized approaches that ensure high-quality automated transcriptions.

Noteworthy papers in this area include:

LLMs Can Also Do Well: achieves state-of-the-art semantic role labeling performance by equipping large language models with retrieval-augmented generation and self-correction mechanisms.

The impact of fine tuning in LLaMA on hallucinations for named entity extraction in legal documentation: demonstrates that fine-tuning large language models reduces hallucinations and improves named entity extraction accuracy in legal texts.

Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs: proposes a step-by-step instruction strategy and a simplified tabular output format that improve the dependency parsing accuracy of large language models.
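To make the tabular-output idea from the dependency-parsing paper concrete, here is a minimal sketch of what a step-by-step prompt and a parser for tab-separated model output might look like. The prompt wording, column layout, and example response are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical prompt: step-by-step instructions plus a simple tab-separated
# output table, in the spirit of the paper's approach (layout is assumed).
PROMPT_TEMPLATE = """Parse the dependency structure of the sentence step by step.
1. List the tokens with their positions.
2. For each token, identify the position of its syntactic head (0 = root).
3. Output one tab-separated row per token, matching the header below.

Sentence: {sentence}

ID\tFORM\tHEAD\tDEPREL"""


def parse_tabular_output(response: str):
    """Convert the model's tab-separated rows into (id, form, head, deprel) tuples,
    skipping the header row and any stray prose the model emits."""
    rows = []
    for line in response.strip().splitlines():
        parts = line.split("\t")
        if len(parts) != 4 or not parts[0].isdigit():
            continue
        idx, form, head, deprel = parts
        rows.append((int(idx), form, int(head), deprel))
    return rows


# Hand-written example of a well-formed model response (for illustration only):
response = "ID\tFORM\tHEAD\tDEPREL\n1\tDogs\t2\tnsubj\n2\tbark\t0\troot"
print(parse_tabular_output(response))
```

The simple, rigid table makes the output trivially machine-checkable, which is part of why constrained formats tend to help LLM accuracy on structured-prediction tasks.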
Sources
Auto Review: Second Stage Error Detection for Highly Accurate Information Extraction from Phone Conversations
The impact of fine tuning in LLaMA on hallucinations for named entity extraction in legal documentation
Addressing Pitfalls in Auditing Practices of Automatic Speech Recognition Technologies: A Case Study of People with Aphasia