OpenAI Realizes It Made a Terrible Mistake

OpenAI claims to have discovered what drives hallucinations, the strong tendency of AI models to confidently generate factually incorrect answers.
It's a huge problem plaguing the entire industry, one that greatly limits the technology's usefulness. Worse, experts have found that the issue is getting worse as AI models become more capable.
As a result, despite the astronomical sums spent on them, frontier AI models are still prone to making inaccurate claims when faced with a question they don't know how to answer.
Whether the problem can be solved at all remains hotly debated, with some experts arguing that hallucinations are intrinsic to the technology itself. In other words, large language models may turn out to be a dead end in the quest to develop AIs with a reliable grasp of factual claims.
In a paper published last week, a team of OpenAI researchers attempted an explanation. They suggest that large language models hallucinate because the way they are built and evaluated incentivizes them to guess rather than admit they simply don't know the answer.
Hallucinations, the paper reads, "persist due to the way most evaluations are graded: language models are optimized to be good test-takers, and guessing when uncertain improves test performance."
Traditionally, an AI model's output is graded in a binary way: it is rewarded for a correct response and penalized for an incorrect one.
Put simply, guessing is rewarded, because the guess might turn out to be right, over the AI admitting it doesn't know the answer, which is graded as incorrect no matter what.
As a result, under "natural statistical pressures," LLMs are more inclined to hazard an answer than to "acknowledge uncertainty."
"Most leaderboards prioritize and rank models based on accuracy, but errors are worse than abstentions," OpenAI wrote in an accompanying blog post.
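To make that incentive concrete, here is a minimal sketch, not taken from the paper, comparing the expected score of guessing versus abstaining under accuracy-only grading; the 20 percent chance of a lucky guess is an assumed, illustrative figure.

```python
# Illustrative sketch: expected score of guessing vs. abstaining under
# accuracy-only grading. The 20% chance of a lucky guess is an assumed,
# made-up figure, not a number from OpenAI's paper.

P_CORRECT_GUESS = 0.2  # assumed probability the model's guess happens to be right

# Accuracy-only grading: 1 point for a correct answer, 0 for anything else,
# including "I don't know".
def accuracy_score(is_correct: bool) -> float:
    return 1.0 if is_correct else 0.0

expected_guess = (P_CORRECT_GUESS * accuracy_score(True)
                  + (1 - P_CORRECT_GUESS) * accuracy_score(False))
expected_abstain = accuracy_score(False)  # abstaining is always graded as wrong

print(f"Expected score when guessing:   {expected_guess:.2f}")   # 0.20
print(f"Expected score when abstaining: {expected_abstain:.2f}") # 0.00
# Guessing strictly dominates abstaining, so a model optimized on this
# metric learns to bluff even when it is uncertain.
```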
In other words, OpenAI is saying that it, along with its imitators across the industry, made a fundamental structural mistake in how they train and evaluate their AI.
A lot is riding on whether the problem can be fixed going forward. OpenAI claims there is "a straightforward fix": "Penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty."
Going forward, evaluations need to be updated so that "their scoring discourages guessing," the blog post says. "If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess."
"Simple modifications of mainstream evaluations can realign incentives, rewarding appropriate expressions of uncertainty rather than penalizing them," the company's researchers concluded. "This can remove barriers to the suppression of hallucinations, and open the door to future work on nuanced language models, for example with richer pragmatic competence."
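As a rough illustration of the kind of adjustment the researchers describe, the sketch below assumes a specific penalty for confident errors and a specific partial credit for abstaining; those values are made up for illustration and are not OpenAI's actual grading rules. With such weights, the expected-score calculation flips in favor of admitting uncertainty.

```python
# Illustrative sketch of an adjusted grading scheme: confident errors are
# penalized and abstentions earn partial credit. The -1.0 penalty and 0.3
# partial credit are assumed values for illustration only.

P_CORRECT_GUESS = 0.2   # assumed probability a guess happens to be right
WRONG_PENALTY = -1.0    # assumed penalty for a confident wrong answer
ABSTAIN_CREDIT = 0.3    # assumed partial credit for saying "I don't know"

expected_guess = P_CORRECT_GUESS * 1.0 + (1 - P_CORRECT_GUESS) * WRONG_PENALTY
expected_abstain = ABSTAIN_CREDIT

print(f"Expected score when guessing:   {expected_guess:.2f}")   # -0.60
print(f"Expected score when abstaining: {expected_abstain:.2f}") #  0.30
# With these weights, abstaining beats guessing whenever the model's chance
# of being right is low, so the incentive to bluff disappears.
```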
How these adjustments will hold up in the real world remains to be seen. While the company claims its latest GPT-5 model hallucinates less, users have been left largely unconvinced.
For now, the AI industry will have to keep grappling with the problem as it tries to justify tens of billions of dollars in capital expenditures and soaring emissions.
"Hallucinations remain a fundamental challenge for all large language models, but we are working hard to further reduce them," OpenAI promised in the blog post.
More on hallucinations: GPT-5 Users Say It's Making Huge Factual Errors