Large Vision-Language Models (LVLMs) often produce responses that misalign with factual information, a phenomenon known as hallucinations. While hallucinations are well-studied, the exact causes behind them remain underexplored. In this paper, we first investigate the root causes of hallucinations in LVLMs. Our findings reveal that existing mitigation techniques primarily reduce hallucinations for visual recognition prompts—those that require simple descriptions of visual elements—but fail for cognitive prompts that demand deliberate reasoning.
We identify the core issue as a lack of true visual perception in LVLMs: although they can accurately recognize visual elements, they struggle to fully interpret these elements in the context of the input prompt and effectively link this recognition to their internal knowledge, which is critical for reasoning. To address this gap, we introduce Visual Description Grounded Decoding (VDGD), a simple, robust, and training-free method designed to enhance visual perception and improve reasoning capabilities in LVLMs.
VDGD works by first generating a detailed description of the image and appending it as a prefix to the instruction. During response generation, tokens are sampled based on their KL divergence to the description, favoring candidates with lower divergence. Experimental results on multiple visual reasoning benchmarks and LVLMs demonstrate that VDGD consistently outperforms existing baselines by 2%–33%. Finally, we introduce VaLLu, a benchmark designed for comprehensive evaluation of the cognitive capabilities of LVLMs.
1. While prior techniques work well for visual recognition tasks, they fail when applied to cognitive prompts requiring reasoning.
2. We categorize hallucinations into four types: Language, Vision, Style, and Instruction Tuning (IT). Existing methods only mitigate a subset.
3. LVLMs can recognize visual elements but struggle to link them with internal knowledge, leading to incorrect reasoning.
VDGD operates in two stages. First, it generates a detailed description of the image and appends it as a prefix to the instruction. Then, during response generation, candidate tokens are scored by their KL divergence to the description, and candidates with lower divergence are favored, as sketched below.
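To make this concrete, here is a minimal sketch of one plausible reading of the decoding rule, written against a HuggingFace-style causal LM interface. The specific rescoring formula (scoring each candidate by the smallest KL divergence between its one-hot distribution and the next-token distributions computed over the description prefix, which reduces to `-log q_j(candidate)`), the penalty form, and the names `vdgd_decode`, `alpha`, and `top_k` are illustrative assumptions rather than the paper's exact formulation; image inputs are also omitted for brevity, since the sketch conditions only on the textual description and instruction.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def vdgd_decode(model, tokenizer, description, instruction,
                max_new_tokens=128, top_k=10, alpha=1.0):
    """Sketch of description-grounded decoding (assumed formulation)."""
    device = next(model.parameters()).device

    # 1) Prefix the generated image description to the instruction.
    prompt = f"{description}\n{instruction}"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

    # Next-token distributions computed over the description prefix;
    # these serve as the grounding reference for rescoring candidates.
    desc_len = tokenizer(description, return_tensors="pt").input_ids.shape[1]
    prefix_logits = model(input_ids).logits[0, :desc_len]      # (desc_len, vocab)
    prefix_logprobs = F.log_softmax(prefix_logits, dim=-1)     # log q_j(.)

    generated = input_ids
    for _ in range(max_new_tokens):
        logits = model(generated).logits[0, -1]                # (vocab,)
        logprobs = F.log_softmax(logits, dim=-1)

        # 2) Keep only the top-k plausible candidates.
        cand_logprobs, cand_ids = torch.topk(logprobs, top_k)

        # 3) For each candidate, KL divergence of its one-hot distribution to
        #    the closest description distribution: min_j -log q_j(candidate).
        kl_to_desc = (-prefix_logprobs[:, cand_ids]).min(dim=0).values

        # 4) Penalize high-divergence candidates (exact penalty is assumed);
        #    greedy selection over the rescored set is used for simplicity.
        scores = cand_logprobs - alpha * torch.log1p(kl_to_desc)
        next_id = cand_ids[scores.argmax()].view(1, 1)

        generated = torch.cat([generated, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

    return tokenizer.decode(generated[0, input_ids.shape[1]:],
                            skip_special_tokens=True)
```

In practice, the description would first be produced by the same LVLM (e.g., prompted to describe the image in detail) before calling `vdgd_decode`; sampling from the rescored candidates instead of taking the argmax is an equally valid reading of the description above.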
| Aspect | VDGD | Other Baselines |
|---|---|---|
| Training requirement | Training-free approach that appends a detailed image description and uses KL divergence for robust decoding. | Often require additional training, fine-tuning, or specialized modules to address object hallucinations. |
| Scope of mitigation | Targets all forms of hallucinations, especially those in cognitive prompts, by bridging the "visual perception gap." | Primarily reduce object-based or "visual recognition" hallucinations; limited efficacy on more complex, reasoning-intensive tasks. |
| Performance gains | Consistently outperforms baselines by 2%–33% on multiple benchmarks, improving both reasoning and recognition accuracy. | Show smaller or no gains on cognitive prompts requiring extended reasoning or domain knowledge. |
We evaluate VDGD on multiple benchmarks, demonstrating improvements of 2%–33% over existing techniques.
We illustrate several instances from VaLLu and compare LLaVA-1.5's responses under Greedy, VCD, and VDGD decoding.
VaLLu consists of 1,500 instances sourced from multiple benchmarks, including Oven, MMMU, MMC, MathVista, HallusionBench, MATH-Vision, and MME. It covers open-ended generation tasks exclusively, excluding Yes/No and multiple-choice questions, in order to evaluate the diverse forms of hallucination produced by LVLMs.
The dataset is carefully curated to balance evaluation cost and task diversity, ensuring a comprehensive yet affordable evaluation. Additionally, VaLLu undergoes manual filtering to remove noisy samples (we find existing benchmarks to contain noisy samples, as shown below) and is enriched with metadata annotations and expert-provided responses for high-quality benchmarking.
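For concreteness, the sketch below shows a hypothetical schema for a single VaLLu instance, reflecting the elements described above: the source benchmark, an open-ended prompt, metadata annotations, and an expert-provided reference response. The class and field names are illustrative assumptions, not the released dataset's actual keys.

```python
from dataclasses import dataclass, field


@dataclass
class VaLLuInstance:
    """Hypothetical schema for one VaLLu example (field names assumed)."""
    image_path: str                # image associated with this instance
    prompt: str                    # open-ended instruction (no Yes/No or MCQ)
    source_benchmark: str          # e.g., "MMMU", "MathVista", "HallusionBench"
    expert_response: str           # expert-provided reference answer
    metadata: dict = field(default_factory=dict)  # task type, domain, etc.
```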