Language models have been conditioned to hazard wild guesses instead of admitting ignorance, a study has found.

The company behind ChatGPT has addressed the persistent problem of artificial intelligence models generating plausible but false statements, which it calls "hallucinations". In a statement on Friday, OpenAI explained that models are typically encouraged to hazard a guess, however improbable, rather than acknowledge that they cannot answer a question. The issue is attributable to the core principles underlying "standard training and evaluation procedures," the company added.

OpenAI revealed that instances where language models "confidently generate an answer that isn't true" have continued to plague even newer, more advanced iterations, including its latest flagship GPT-5 system.

According to the findings of a recent study, the problem is rooted in the way language models' performance is currently evaluated: a model that guesses is ranked higher than a careful one that admits uncertainty. Under the standard protocols, AI systems learn that failing to generate an answer is a surefire way to score zero points on a test, while an unsubstantiated guess may just turn out to be correct.

"Fixing scoreboards can broaden adoption of hallucination-reduction techniques," the statement concluded, acknowledging, however, that "accuracy will never reach 100% because, regardless of model size, search and reasoning capabilities, some real-world questions are inherently unanswerable."
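
The scoring incentive the study describes can be made concrete with a toy calculation. The sketch below is purely illustrative and not code from the study; it assumes a simple accuracy-only grader in which a guess earns one point if correct and zero otherwise, while an "I don't know" always earns zero, so guessing never scores worse in expectation.

```python
# Toy illustration (assumed setup, not OpenAI's benchmark code): compare the
# expected score of guessing versus abstaining under accuracy-only grading.

def expected_score_guess(p_correct: float) -> float:
    """Expected score for answering: 1 point if right, 0 if wrong."""
    return 1.0 * p_correct

def expected_score_abstain() -> float:
    """Expected score for admitting uncertainty under accuracy-only grading."""
    return 0.0

for p in (0.0, 0.1, 0.5):
    print(f"p(correct)={p:.1f}  guess={expected_score_guess(p):.2f}  "
          f"abstain={expected_score_abstain():.2f}")

# Even a 10% chance of being right beats abstaining, which is the incentive
# the study says pushes models toward confident but false answers.
```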