What is AI hallucination?
AI hallucination occurs when a language model generates confident, fluent, but factually incorrect information. The model does not "know" it is wrong; it produces text that is statistically plausible given the patterns in its training data, regardless of factual accuracy. Typical examples: citing papers that do not exist, inventing statistics, attributing quotes to the wrong people, describing features a product does not have, and fabricating legal precedents.

Hallucination is inherent to how LLMs work: they predict likely next tokens, not verified facts. Common causes are that the model extrapolates beyond its training data, blends fragments from different contexts, and has no built-in way to check claims against ground truth.

Mitigation strategies (a short code sketch of each follows below):

1) RAG: ground the model in retrieved documents so it answers from real sources rather than relying on training data alone.
2) Temperature control: a lower temperature (0.0-0.3) reduces sampling randomness, which curbs creative fabrication in fact-oriented tasks.
3) Prompt engineering: instruct the model to say "I don't know" when uncertain, to cite sources, and to distinguish facts from inferences.
4) Guardrails: validate outputs against known constraints, for example by checking that cited URLs exist and verifying numbers against a trusted database.

Hallucination cannot be eliminated completely, so design your application to handle it: use RAG for factual tasks, add verification layers, and never act on LLM outputs in high-stakes decisions without review.
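A minimal sketch of the RAG pattern, kept library-agnostic: the in-memory corpus, the keyword-overlap retriever, and the `call_llm` placeholder are illustrative stand-ins, not a specific framework's API. A real system would use a vector store and an actual LLM client.

```python
# Minimal RAG sketch: retrieve relevant text, then ground the prompt in it.
# CORPUS and call_llm() are illustrative placeholders, not a real data source or SDK.

CORPUS = [
    "Our product supports CSV and JSON export as of version 2.3.",
    "The refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm UTC, Monday through Friday.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; stands in for embedding search."""
    q_words = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Compose a prompt that restricts the model to the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM client you use."""
    raise NotImplementedError

if __name__ == "__main__":
    print(build_grounded_prompt("What formats can I export to?"))
```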
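Temperature control is just a request parameter. The sketch below assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name is only an example, and any chat-completion API with a temperature setting works the same way.

```python
# Low temperature reduces sampling randomness for fact-oriented prompts.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "When was the HTTP/2 specification published?"}],
    temperature=0.1,  # the 0.0-0.3 range favours deterministic, less "creative" output
)
print(response.choices[0].message.content)
```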
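The prompt-engineering advice translates directly into a system message. The wording below is one possible phrasing, not a canonical prompt.

```python
# A system prompt that asks the model to admit uncertainty, cite sources,
# and separate facts from inferences. The exact wording is illustrative.
SYSTEM_PROMPT = (
    "You are a careful assistant. Follow these rules:\n"
    "1. If you are not sure of an answer, say \"I don't know\" instead of guessing.\n"
    "2. Cite the source for every factual claim; if you have no source, say so.\n"
    "3. Label inferences explicitly, e.g. \"Inference: ...\", and keep them "
    "separate from stated facts."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "How many employees does Acme Corp have?"},  # example question
]
# Pass `messages` to your chat-completion call as in the temperature example above.
```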
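Finally, a sketch of the two guardrails mentioned above: checking that cited URLs actually resolve and checking claimed numbers against a trusted table. The `requests` dependency, the regex, and the reference figures are assumptions made for the example, not part of any particular guardrail library.

```python
# Output guardrails: verify cited URLs resolve and check numbers against a
# trusted reference. `requests` is a third-party dependency (pip install requests);
# KNOWN_FIGURES stands in for your real database.
import re
import requests

KNOWN_FIGURES = {"2023 revenue": 4.2}  # trusted values, e.g. from a finance system

def urls_resolve(text: str, timeout: float = 5.0) -> dict[str, bool]:
    """HEAD-request every URL the model cited and report which ones exist."""
    results = {}
    for url in re.findall(r"https?://[^\s)\]]+", text):
        url = url.rstrip(".,;")
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

def figure_matches(claimed: float, key: str, tolerance: float = 0.01) -> bool:
    """Reject numbers that drift from the trusted reference value."""
    return key in KNOWN_FIGURES and abs(claimed - KNOWN_FIGURES[key]) <= tolerance

output = "Revenue was 4.2 billion in 2023 (source: https://example.com/)."
print(urls_resolve(output))          # {'https://example.com/': True}
print(figure_matches(4.2, "2023 revenue"))  # True
```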