AI hallucinations
OpenAI Publishes Rare Paper: We Found the Culprit Behind AI Hallucinations
OpenAI’s new paper argues that language models hallucinate because accuracy-based evaluation metrics reward guessing over admitting uncertainty. It proposes scoring that penalizes confident errors and gives credit for expressing uncertainty.
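To make the incentive concrete, here is a minimal sketch in Python contrasting plain accuracy scoring with a confidence-penalized rule. The threshold `t`, the `t / (1 - t)` penalty for wrong answers, and the function names are illustrative assumptions, not necessarily the paper's exact formulation; the point is only that under plain accuracy a low-confidence guess always beats abstaining, while under the penalized rule it does not.

```python
# Illustrative scoring rules: plain accuracy vs. a confidence-penalized rule.
# The threshold t and the t / (1 - t) penalty are assumptions for illustration.

def accuracy_score(answer: str, correct: str) -> float:
    """Standard accuracy: 1 for a correct answer, 0 for anything else.
    Abstaining ("I don't know") scores the same as a wrong guess,
    so a model that always guesses can only gain."""
    return 1.0 if answer == correct else 0.0


def penalized_score(answer: str, correct: str, t: float = 0.75) -> float:
    """Confidence-targeted scoring: correct = +1, abstain = 0,
    wrong = -t / (1 - t). Guessing only pays off when the model's
    chance of being right exceeds t."""
    if answer == "I don't know":
        return 0.0
    return 1.0 if answer == correct else -t / (1.0 - t)


if __name__ == "__main__":
    # A model that is only 30% sure of its answer:
    # under accuracy it should always guess; under the penalized rule,
    # guessing has negative expected value and abstaining is better.
    p, t = 0.3, 0.75
    guess_ev_accuracy = p * 1.0
    guess_ev_penalized = p * 1.0 + (1 - p) * (-t / (1.0 - t))
    print(f"Guessing EV under accuracy:  {guess_ev_accuracy:+.2f}")   # +0.30
    print(f"Guessing EV under penalty:   {guess_ev_penalized:+.2f}")  # -1.80
    print(f"Abstaining EV under penalty: {0.0:+.2f}")                 # +0.00
```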