Less is more: Meta study shows shorter reasoning improves AI accuracy by 34%




Researchers from Meta's FAIR team and the Hebrew University of Jerusalem have discovered that forcing large language models to "think" less actually improves their performance on complex reasoning tasks.

The study, published today, found that shorter reasoning processes in AI systems lead to more accurate results while significantly reducing computational costs.

"In this work, we challenge the assumption that longer thinking chains result in better reasoning capabilities," the authors write in their paper, titled "Don't Overthink It: Preferring Shorter Thinking Chains for Improved LLM Reasoning."

The research runs counter to the prevailing trend in AI development, where companies have invested heavily in scaling up computing resources so that models can perform extensive reasoning through long "thinking chains," the detailed step-by-step paths AI systems use to work through complex problems.

AI accuracy jumps 34% when models use shorter reasoning chains

The researchers found that within the same reasoning task, "shorter thinking chains are significantly more likely to yield correct answers, up to 34.5% more accurate than the longest chain sampled for the same question." The finding held true across multiple leading AI models and benchmarks.

"While demonstrating impressive results, [extensive reasoning] incurs significant computational costs and inference time," the authors note, pointing to a substantial inefficiency in how these systems are currently deployed.

Based on these findings, the team developed a new approach called "short-m@k," which runs multiple reasoning attempts in parallel but halts computation as soon as the first few processes complete. The final answer is then chosen by majority vote among these shorter chains.
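The selection rule described above can be sketched in a few lines of Python. This is a simplified illustration, not the authors' implementation: the function name, the `(length, answer)` tuple format, and the shortest-chain tie-break are assumptions for demonstration, and sorting by chain length stands in for stopping a real parallel decoder once the first m generations finish.

```python
from collections import Counter

def short_m_at_k(samples, m):
    """Illustrative sketch of a short-m@k-style selection rule:
    of k parallel reasoning attempts, keep only the first m to
    finish (i.e., the m shortest chains) and majority-vote their
    final answers."""
    # Each sample is (chain_length_in_tokens, final_answer).
    # Sorting by length simulates "the first m to complete".
    finished_first = sorted(samples, key=lambda s: s[0])[:m]
    votes = Counter(answer for _, answer in finished_first)
    top_count = votes.most_common(1)[0][1]
    # Tie-break (an assumption here): among answers with the top
    # vote count, prefer the one from the shortest chain.
    tied = {a for a, c in votes.items() if c == top_count}
    for _, answer in finished_first:
        if answer in tied:
            return answer

# Five sampled chains of varying length; the three shortest
# (95, 120, 210 tokens) vote 2-1 for "42".
samples = [(120, "42"), (340, "41"), (95, "42"), (500, "40"), (210, "41")]
print(short_m_at_k(samples, m=3))  # prints "42"
```

The key point the paper makes is that discarding the longer chains is not just cheaper, it is often more accurate, since the longest samples are the least likely to be correct.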

New 'short-m@k' method cuts computing costs by 40% while boosting performance

For organizations deploying large AI reasoning systems, the implications could be substantial. The researchers found that their method can reduce computational resources by up to 40% while maintaining the same level of performance as standard approaches.

"short-3@k, while slightly less efficient than short-1@k, consistently outperforms majority voting across all compute budgets, while still being substantially faster (up to 33% wall-time reduction)," the paper states.

Michael Hassid, the paper's lead author, and his team also found that training AI models on shorter reasoning examples improves their performance, challenging yet another fundamental assumption in AI development.

"Training on shorter chains leads to better performance," the researchers write. "In contrast, fine-tuning on S1 increases reasoning time without significant performance gains."

Tech giants could save millions by adopting the 'don't overthink it' approach

The findings arrive at a critical moment for the AI industry, as companies race to deploy increasingly powerful models that consume enormous computational resources.

"Our findings suggest rethinking current methods of test-time compute in reasoning LLMs, emphasizing that longer 'thinking' does not necessarily translate into improved performance and can, counter-intuitively, lead to degraded results," the researchers conclude.

This research stands in contrast to other prominent approaches. Previous influential studies, including OpenAI's work on "chain-of-thought" prompting and "self-consistency" methods, have generally advocated for more extensive reasoning. It also builds on recent work such as the "Tree of Thoughts" framework from Princeton and Google DeepMind and Carnegie Mellon's "Self-Refine" methodology, which explored different approaches to AI reasoning.

For technical decision-makers evaluating AI investments, the research suggests that bigger and more compute-intensive is not always better. The study points to potential cost savings and performance gains from optimizing for efficiency rather than raw computing power.

In an industry obsessed with scaling up, it turns out that teaching AI to be more concise doesn't just save computing power; it makes the machines smarter. Sometimes, even artificial intelligence benefits from the age-old wisdom: don't overthink it.




2025-05-28 19:17:00
