
Hallucination

When an AI model generates plausible-sounding but factually incorrect or fabricated information.

Hallucinations occur because language models predict statistically likely continuations of text rather than retrieving verified facts. The model may invent citations, misstate numbers, or confidently assert claims that have no basis in the source material.

In regulated industries, hallucinations are not merely an annoyance; they are a liability. RAG-based architectures with citation verification reduce hallucination risk by constraining the model to answer only from retrieved source documents and by checking that the passages it cites actually appear in them.
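As a rough illustration of that pattern, here is a minimal sketch in Python: retrieve the best-matching sources, instruct the model to answer only from them with bracketed quotes, and reject any answer whose quotes cannot be found verbatim in the sources. Everything here is an assumption for illustration: the `generate` parameter stands in for any LLM call, retrieval is naive keyword overlap, and none of the names refer to a real library or product API.

```python
import re
from typing import Callable


def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Naive retrieval: rank documents by keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]


def verify_citations(answer: str, sources: list[str]) -> bool:
    """Check that every [bracketed] quote in the answer appears verbatim
    in at least one retrieved source. (This sketch accepts answers with
    no citations at all; a production gate would likely require them.)"""
    citations = re.findall(r"\[(.+?)\]", answer)
    return all(any(c in s for s in sources) for c in citations)


def answer_with_citations(query: str, documents: list[str],
                          generate: Callable[[str], str]) -> str:
    """Constrain the model to the retrieved sources, then gate its output
    on citation verification."""
    sources = retrieve(query, documents)
    prompt = (
        "Answer ONLY from the sources below. Cite each claim by quoting the "
        "supporting passage in [brackets]. If the sources do not contain the "
        "answer, say so.\n\n"
        + "\n\n".join(f"Source {i + 1}: {s}" for i, s in enumerate(sources))
        + f"\n\nQuestion: {query}"
    )
    answer = generate(prompt)
    if not verify_citations(answer, sources):
        return "No verifiable answer found in the source documents."
    return answer
```

The key design choice in this sketch is that verification acts as a hard gate: an answer whose citations cannot be traced back to the retrieved text is discarded rather than shown, trading some recall for a much lower chance of surfacing fabricated claims.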
