Mitigating Hallucinations in Large Language and Vision Models

Research on large language and vision models is increasingly focused on hallucinations: outputs that are fluent but factually incorrect, unsupported by the input, or nonsensical. Mitigation efforts span training-free and self-supervised approaches, phrase-based fuzzing for detecting package hallucinations, and layer contrastive decoding, all aimed at making these models more reliable. Notable papers in this area include Exposing Hallucinations To Suppress Them, which edits VLM representations using generative anchors to suppress hallucinated content, and GHOST, which generates images designed to induce hallucinations in multimodal LLMs and thereby stress-test them. Additionally, the Review of Hallucination Understanding in Large Language and Vision Models provides a unified framework for characterizing hallucinations and links them to specific mechanisms within a model's lifecycle.
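To make the decoding-side idea concrete, below is a minimal sketch of generic layer-contrastive decoding: the next-token distribution from the final layer is contrasted against that of an earlier layer, boosting tokens whose evidence accumulates through the network and damping those an immature layer already favored. The function name, the `alpha`/`beta` parameters, and the plausibility mask are illustrative assumptions, not the specific formulation of the cited paper.

```python
import torch
import torch.nn.functional as F

def layer_contrastive_logits(final_logits: torch.Tensor,
                             early_logits: torch.Tensor,
                             alpha: float = 0.1,
                             beta: float = 1.0) -> torch.Tensor:
    """Contrast final-layer logits against an earlier layer's logits.

    Tokens whose log-probability rises between the early and final layer are
    boosted; tokens the early layer already favored (often generic or
    ungrounded continuations) are damped. `alpha` keeps only tokens the final
    layer finds plausible; `beta` scales the contrast strength.
    """
    final_logp = F.log_softmax(final_logits, dim=-1)
    early_logp = F.log_softmax(early_logits, dim=-1)

    # Adaptive plausibility mask: discard tokens whose final-layer probability
    # falls below alpha times the probability of the most likely token.
    probs = final_logp.exp()
    keep = probs >= alpha * probs.max(dim=-1, keepdim=True).values

    contrasted = final_logp + beta * (final_logp - early_logp)
    return contrasted.masked_fill(~keep, float("-inf"))

# Usage sketch: run the model with hidden states exposed, project an
# intermediate hidden state through the LM head to get `early_logits`,
# then sample or take argmax from layer_contrastive_logits(final, early).
```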

Sources

Are Hallucinations Bad Estimations?

Exposing Hallucinations To Suppress Them: VLMs Representation Editing With Generative Anchors

Library Hallucinations in LLMs: Risk Analysis Grounded in Developer Queries

HFuzzer: Testing Large Language Models for Package Hallucinations via Phrase-based Fuzzing

Mitigating Hallucination in Multimodal LLMs with Layer Contrastive Decoding

GHOST: Hallucination-Inducing Image Generation for Multimodal LLMs

Machine Learning Algorithms for Improving Black Box Optimization Solvers

Review of Hallucination Understanding in Large Language and Vision Models

Black-Box Combinatorial Optimization with Order-Invariant Reinforcement Learning

FalseCrashReducer: Mitigating False Positive Crashes in OSS-Fuzz-Gen Using Agentic AI
