Addressing Hallucinations in Large Language Models

The field of large language models (LLMs) is currently focused on the problem of hallucination, in which models produce outputs that contradict explicit source evidence because they fall back on their pre-trained knowledge. Researchers are exploring several mitigation approaches, including new methodologies for assessing reliability, frequency-framed prompting to reduce bias in opinion summarisation, and modular, question-driven pipelines for fact-based meeting summarization. Together, these efforts aim to improve the controllability, faithfulness, and personalization of LLM outputs (a minimal sketch of frequency-framed prompting follows the paper list below). Noteworthy papers in this area include:

  • A survey of taxonomy, methods, and directions for addressing hallucinations in LLM-based agents, which provides a comprehensive overview of the current state of research.
  • A study on the knowledge-behaviour disconnect in LLM-based chatbots, which identifies a fundamental limitation in the capabilities of LLMs and examines its implications for hallucination.
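
The exact prompt design used in the REFER paper is not described here, so the sketch below only illustrates the general idea behind frequency-framed prompting as this digest presents it: opinions are counted and shown to the model with explicit frequencies rather than as a raw list, so that viewpoints can be summarised in proportion to how often they occur. The function name, prompt wording, and grouping of identical opinions are illustrative assumptions, not the published method.

```python
from collections import Counter


def frequency_framed_prompt(opinions: list[str]) -> str:
    """Build a summarisation prompt that states how often each opinion
    appears, so the model sees explicit frequencies instead of a raw
    list it might weight unevenly. Illustrative sketch, not the REFER
    paper's actual prompt template."""
    counts = Counter(opinions)
    total = len(opinions)
    framed = "\n".join(
        f"- {count} out of {total} reviewers said: {opinion}"
        for opinion, count in counts.most_common()
    )
    return (
        "Summarise the following customer opinions. "
        "Reflect each viewpoint in proportion to how frequently it was expressed.\n"
        f"{framed}\n"
        "Summary:"
    )


if __name__ == "__main__":
    reviews = [
        "battery life is excellent",
        "battery life is excellent",
        "battery life is excellent",
        "the screen scratches easily",
    ]
    # The resulting prompt string would then be sent to whichever LLM
    # is being evaluated for bias in its opinion summaries.
    print(frequency_framed_prompt(reviews))
```

Stating the counts explicitly is intended to keep minority and majority viewpoints represented in proportion to their actual frequency, rather than letting the model over- or under-emphasise either.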

Sources

Knowledge-Driven Hallucination in Large Language Models: An Empirical Study on Process Modeling

REFER: Mitigating Bias in Opinion Summarisation via Frequency Framed Prompting

Re-FRAME the Meeting Summarization SCOPE: Fact-Based Summarization and Personalization via Questions

LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions

The Knowledge-Behaviour Disconnect in LLM-based Chatbots

Instruction Boundary: Quantifying Biases in LLM Reasoning under Various Coverage
