# Why Language Models Hallucinate

## Abstract

> Like students facing hard exam questions, [large language models](https://wiki.g15e.com/pages/Large%20language%20model.txt) sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "[hallucinations](https://wiki.g15e.com/pages/Hallucination%20(AI).txt)" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the [training](https://wiki.g15e.com/pages/Training%20(machine%20learning).txt) and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious—they originate simply as errors in [binary classification](https://wiki.g15e.com/pages/Binary%20classification.txt). If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded—language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.

https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf

## Introduction

> We identify the main statistical drivers of hallucinations, from their pretraining origins to their post-training persistence. A novel connection between supervised and unsupervised learning demystifies their origin, even when training data contain IDK. The persistence of hallucinations, despite extensive work on the problem, is explained by the recognition that hallucination-like guessing is rewarded by most primary evaluations. We discuss statistically rigorous modifications to existing evaluations that pave the way to effective mitigation.

## Related work

## Pretraining errors

## Post-training and hallucination

To nudge the model toward saying "I don't know" when it does not know, the paper proposes scoring "I don't know" as 0 points, a correct answer as 1 point, and a wrong answer with a penalty of t/(1 − t) points. For example, with t = 0.9 the penalty is −9 points, so the rational strategy is to answer only when more than 90% confident (the sketch at the end of this note works through the expected scores):

> we propose evaluations explicitly state confidence targets in their instructions, within the prompt (or system message). For example, one could append a statement like the following to each question:
>
> "Answer only if you are > t confident, since mistakes are penalized t/(1 − t) points, while correct answers receive 1 point, and an answer of 'I don't know' receives 0 points."

Existing, widely used evaluations must be changed together, rather than merely adding new hallucination benchmarks:

> … additional hallucination evaluations may not suffice when the primary evaluations penalize honestly reporting confidence and uncertainty.

## Discussion and limitations

## Conclusions
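
---

A minimal sketch of the arithmetic behind the confidence-target grading quoted in the post-training section above. It assumes only the scoring rule from the paper (correct = 1, wrong = −t/(1 − t), "I don't know" = 0); the function names and example confidences are illustrative, not from the paper. Answering with subjective confidence p has expected score p − (1 − p)·t/(1 − t), which beats the 0 points for "I don't know" exactly when p > t.

```python
def expected_score_if_answering(p: float, t: float) -> float:
    """Expected score of answering with subjective confidence p,
    under the grading: correct = +1, wrong = -t/(1 - t), IDK = 0."""
    penalty = t / (1 - t)
    return p * 1 - (1 - p) * penalty


def should_answer(p: float, t: float) -> bool:
    """Answering beats 'I don't know' (score 0) exactly when p > t."""
    return expected_score_if_answering(p, t) > 0


if __name__ == "__main__":
    t = 0.9  # confidence target; penalty is t/(1 - t) = 9 points
    for p in (0.5, 0.85, 0.9, 0.95):
        print(f"confidence={p:.2f} "
              f"expected={expected_score_if_answering(p, t):+.2f} "
              f"answer={should_answer(p, t)}")
```

With t = 0.9 the expected scores are −4.0, −0.5, 0.0, and +0.5 respectively, so only the 0.95-confident answer is worth giving, matching the "answer only if more than 90% confident" strategy.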