Improvements in ‘reasoning’ AI models may slow down soon, analysis finds

An analysis by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to keep squeezing massive performance gains out of reasoning AI models for much longer. As soon as within a year, progress from reasoning models could slow down, according to the report's findings.
Reasoning models such as OpenAI's o3 have driven substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. These models can apply more computing to problems, which can improve their performance, with the downside that they take longer than conventional models to complete tasks.
Reasoning models are developed by first training a conventional model on a huge amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems.
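For readers who want a concrete picture of that second stage, here is a minimal, self-contained sketch in Python. It is a toy illustration only: the "policy," the verifier, and the update rule are stand-ins I've invented for this example, not OpenAI's or any lab's actual training pipeline.

```python
import random

# Toy "pretrained model": for each problem, a weight over two candidate answers,
# one correct and one wrong, with no initial preference between them.
problems = [(2, 3), (1, 4), (5, 7)]
policy = {p: {p[0] + p[1]: 1.0, p[0] + p[1] + 1: 1.0} for p in problems}

def sample_answer(problem):
    """Sample an answer with probability proportional to its current weight."""
    answers = list(policy[problem])
    weights = [policy[problem][a] for a in answers]
    return random.choices(answers, weights=weights, k=1)[0]

def reward(problem, answer):
    """Toy verifier: reward 1.0 for a correct solution, 0.0 otherwise."""
    return 1.0 if answer == problem[0] + problem[1] else 0.0

# Reinforcement learning stage: sample solutions, score them, and reinforce
# the answers that earn reward (a crude policy-gradient-style update).
for _ in range(500):
    p = random.choice(problems)
    a = sample_answer(p)
    policy[p][a] += 0.1 * reward(p, a)

print(policy)  # correct answers end up with much larger weights
```

The point of the sketch is only the feedback loop: generate a solution, check it, and nudge the model toward whatever earned reward.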
To date, frontier AI labs like OpenAI haven't applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training, according to Epoch.
That's changing. OpenAI has said it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. OpenAI researcher Dan Roberts recently revealed that the company's future plans call for prioritizing reinforcement learning, with far more computing power behind it, even more than for the initial training.
But there's still an upper limit to how much computing can be applied to reinforcement learning, per Epoch.
Josh You, an Epoch analyst and the author of the analysis, explains that performance gains from standard AI model training are currently quadrupling every year, while performance gains from reinforcement learning are growing tenfold every 3-5 months. The progress of reasoning training will "probably converge with the overall frontier by 2026," he writes.
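A rough back-of-the-envelope calculation shows why growth rates like these converge so quickly. The sketch below uses the figures cited above (roughly 4x per year for standard training, 10x every 4 months for reinforcement learning) plus an assumed starting gap of three orders of magnitude, a number chosen purely for illustration rather than taken from Epoch's report.

```python
import math

standard_growth_per_year = 4.0          # standard training compute: ~4x per year
rl_growth_per_year = 10 ** (12 / 4)      # ~10x every 4 months, i.e. ~1000x per year

# Assumption for illustration: RL compute starts 1000x (3 orders of magnitude)
# below the overall training-compute frontier.
initial_gap_orders = 3

# Years until the faster-growing RL compute closes that gap, after which it
# can only grow as fast as the frontier itself.
years_to_converge = initial_gap_orders * math.log(10) / (
    math.log(rl_growth_per_year) - math.log(standard_growth_per_year)
)
print(f"Gap closed in roughly {years_to_converge:.1f} years")  # about 1.3 years
```

Under those assumptions the catch-up takes on the order of a year, which is consistent with the analysis pointing to 2026.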
Epoch's analysis makes a number of assumptions, and draws in part on public comments from AI company executives. But it also makes the case that scaling reasoning models may prove difficult for reasons besides computing, including high overhead costs for research.
"If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," You writes. "Rapid compute scaling is a very important ingredient in reasoning model progress, so it's worth tracking closely."
Any indication that reasoning models may hit some kind of limit in the near future is likely to worry the AI industry, which has invested enormous resources in developing these types of models. Already, studies have shown that reasoning models, which can be incredibly expensive to run, have serious flaws, like a tendency to hallucinate more than certain conventional models.