The field of large language models (LLMs) is increasingly focused on the long-standing problem of hallucination: the generation of plausible but factually incorrect content. Recent work mitigates hallucinations through fine-tuning strategies, prompt refinement techniques, and uncertainty quantification, with the aim of making LLMs more reliable and trustworthy in high-stakes domains such as medicine and finance. Noteworthy contributions include Curative Prompt Refinement (CPR), which significantly improves generation quality while mitigating hallucination, and the Credal Transformer, which integrates uncertainty quantification directly into the model architecture. Research on uncertainty quantification has also produced new estimation methods, such as Retrieval-Augmented Reasoning Consistency (R2C) and Epistemic Uncertainty Quantification via Semantic-preserving Intervention (ESI), which yield more accurate estimates of model uncertainty; a minimal illustrative sketch of consistency-based uncertainty estimation follows the source list below.
Advances in Hallucination Mitigation and Uncertainty Quantification for Large Language Models
Sources
Uncertainty Quantification for Hallucination Detection in Large Language Models: Foundations, Methodology, and Future Directions
Credal Transformer: A Principled Approach for Quantifying and Mitigating Hallucinations in Large Language Models
COSTAR-A: A prompting framework for enhancing Large Language Model performance on Point-of-View questions
ESI: Epistemic Uncertainty Quantification via Semantic-preserving Intervention for Large Language Models
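To make the uncertainty quantification theme above concrete, the following is a minimal, illustrative sketch of a generic consistency-based uncertainty score: sample several answers from an LLM at non-zero temperature, group near-duplicate answers with a crude normalization, and report the entropy over the groups as a hallucination-risk signal. This is not the method of any paper listed above (R2C, ESI, and the Credal Transformer are more sophisticated); the normalization heuristic and the example answers are assumptions chosen purely for illustration.

import math
import re
from collections import Counter
from typing import List


def normalize(answer: str) -> str:
    """Crude normalization so near-duplicates like 'Paris.' and 'paris' group together."""
    return re.sub(r"[^a-z0-9 ]", "", answer.lower()).strip()


def sampling_uncertainty(samples: List[str]) -> float:
    """Entropy (in bits) over groups of roughly equivalent sampled answers.

    High entropy means the model's samples disagree with each other, a common
    proxy signal for hallucination risk in sampling-based uncertainty estimation.
    """
    groups = Counter(normalize(s) for s in samples)
    total = sum(groups.values())
    return -sum((c / total) * math.log2(c / total) for c in groups.values())


if __name__ == "__main__":
    # Hypothetical samples drawn from an LLM at non-zero temperature for the
    # same factual question; disagreement yields a higher uncertainty score.
    consistent = ["Paris", "paris.", "Paris"]
    inconsistent = ["Paris", "Lyon", "Marseille"]
    print(f"consistent:   {sampling_uncertainty(consistent):.3f} bits")
    print(f"inconsistent: {sampling_uncertainty(inconsistent):.3f} bits")

In practice, the grouping step is usually replaced by a semantic equivalence check (for example, bidirectional entailment between answers), and the resulting score is calibrated or thresholded per task before being used to flag likely hallucinations.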