Research on large language models (LLMs) is currently focused on hallucinations, where models produce outputs that contradict explicit source evidence because they fall back on their pre-trained knowledge. Researchers are exploring several mitigation approaches, including new methodologies for assessing reliability, frequency-framed prompting to enhance fairness (a minimal sketch follows the list below), and modular pipelines for summarization tasks. These efforts aim to improve the control, faithfulness, and personalization of LLMs. Noteworthy papers in this area include:
- A survey of the taxonomy, methods, and directions for addressing hallucinations in LLM-based agents, offering a comprehensive overview of the current state of research.
- A study of the knowledge-behaviour disconnect in LLM-based chatbots, highlighting a fundamental limitation of LLM capabilities and its implications for hallucination.