Chain-of-Verification Reduces Hallucination in Large Language Models
Abstract
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models. We study the ability of language models to deliberate on the responses they give in order to correct their mistakes. We develop the Chain-of-Verification (CoVe) method, whereby the model first (i) drafts an initial response; then (ii) plans verification questions to fact-check its draft; (iii) answers those questions independently, so the answers are not biased by other responses; and (iv) generates its final verified response. In experiments, we show CoVe decreases hallucinations across a variety of tasks, ranging from list-based questions based on Wikidata to closed-book MultiSpanQA and longform text generation.
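The four CoVe steps in the abstract can be sketched as a simple prompting pipeline. This is a minimal illustration, not the paper's implementation: the prompt wording and the `llm` callable (any function mapping a prompt string to a completion string) are assumptions for the sake of the example.

```python
from typing import Callable, List, Tuple


def chain_of_verification(query: str, llm: Callable[[str], str]) -> str:
    """Sketch of the four CoVe steps using a caller-supplied LLM function."""
    # (i) Draft an initial baseline response.
    draft = llm(f"Answer the question.\nQ: {query}\nA:")

    # (ii) Plan verification questions that fact-check the draft.
    plan = llm(
        "List one fact-checking question per claim in this answer, "
        f"one per line.\nAnswer: {draft}"
    )
    questions: List[str] = [q.strip() for q in plan.splitlines() if q.strip()]

    # (iii) Answer each question independently; the draft is NOT shown,
    # so verification answers are not biased by the original response.
    verifications: List[Tuple[str, str]] = [
        (q, llm(f"Q: {q}\nA:")) for q in questions
    ]

    # (iv) Generate the final verified response from the draft plus evidence.
    evidence = "\n".join(f"- {q} -> {a}" for q, a in verifications)
    return llm(
        f"Original question: {query}\n"
        f"Draft answer: {draft}\n"
        f"Verification results:\n{evidence}\n"
        "Write a corrected final answer using only the verified facts."
    )
```

In practice `llm` would wrap a real model API call; the key design point is step (iii), where each verification question is answered in a fresh context so the model cannot simply repeat its draft.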
Community
Very interesting. I recently created a basic way of doing this for complex blog article creation. Here is a short video: https://www.youtube.com/watch?v=RWCW648l8Ls
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for Knowledge-intensive Question Answering (2023)
- Cognitive Mirage: A Review of Hallucinations in Large Language Models (2023)
- Halo: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models (2023)
- DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models (2023)
- Contrastive Decoding Improves Reasoning in Large Language Models (2023)
Super interesting in practice! I made a simple chain-of-verification (CoVe) app. You can try running CoVe on prompts like "Name 10 basketball players with 3 MVP awards". Try it here: https://chain-of-verification.streamlit.app/
Say Goodbye to AI Hallucinations: How Chain-of-Verification Makes LLMs Smarter
Links 👉:
👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/