The AI Industry’s Scaling Obsession Is Headed for a Cliff

A new study from MIT suggests that ever larger, more computationally intensive AI models may soon offer diminishing returns compared to smaller ones. By mapping scaling laws against continuing improvements in model efficiency, the researchers found that further leaps in performance from giant models may become harder to achieve, while efficiency gains could make models running on more modest hardware increasingly capable over the next decade.

“In the next five to 10 years, things are very likely to start tightening up,” says Neil Thompson, a computer scientist and MIT professor involved in the study.

Jumps in efficiency, like those demonstrated by a remarkably low-cost DeepSeek model in January, have been a reality check for the AI industry, which is accustomed to burning through massive amounts of computing power.

As it stands, a leading model from a company like OpenAI is currently much better than one trained with a fraction of the compute in an academic lab. The MIT team’s predictions may not hold if, for example, new training methods like reinforcement learning produce surprising results, but they suggest that big AI companies will have less of an advantage in the future.

Hans Gundlach, a research scientist at MIT who led the analysis, became interested in the issue because of the computational demands of running sophisticated models. In collaboration with Thompson and Jason Lynch, another research scientist at MIT, he charted the projected performance of frontier models against that of models built with more modest computational means. Gundlach says the expected trend is particularly clear for the reasoning models now in vogue, which rely more on extra computation during inference.
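The shape of that comparison can be sketched in a few lines of Python. Everything below is an illustrative toy, not numbers from the study: the power-law exponent, the compute figures, and the 3x-per-year efficiency gain are all assumptions chosen for demonstration. The sketch shows two things: on a power-law scaling curve, each additional 10x of compute buys a smaller absolute improvement, and compounding efficiency gains let a fixed, modest hardware budget reach today’s frontier-scale effective compute within several years.

```python
import math

# Toy sketch of the comparison described above. Every constant here is an
# illustrative assumption, not a figure from the MIT study.

def loss(compute: float, a: float = 1.0, b: float = 0.05) -> float:
    """Hypothetical power-law scaling curve: error falls slowly as compute grows."""
    return a * compute ** (-b)

# Part 1: diminishing returns. On a power law, each extra 10x of training
# compute buys a smaller absolute improvement than the previous 10x did.
prev = None
for exp in range(22, 28):
    current = loss(10.0 ** exp)
    if prev is not None:
        print(f"10^{exp} FLOPs -> loss {current:.4f} "
              f"(gain over previous 10x: {prev - current:.4f})")
    prev = current

# Part 2: compounding efficiency. A fixed, modest hardware budget multiplied
# by assumed yearly algorithmic gains reaches today's frontier-scale
# effective compute within several years.
frontier_compute = 1e25   # assumed training compute of a frontier model, FLOPs
modest_compute = 1e22     # assumed compute available to a smaller lab, FLOPs
efficiency_gain = 3.0     # assumed algorithmic-efficiency multiplier per year

years_to_match = math.log(frontier_compute / modest_compute) / math.log(efficiency_gain)
print(f"Years for the modest lab to match today's frontier: {years_to_match:.1f}")
```

With these made-up constants the modest lab catches today’s frontier in roughly six years, which is the kind of tightening window the researchers describe.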

Thompson says the results show the value of improving algorithms as well as scaling up computing. “If you’re spending a lot of money training these models, you should definitely spend some of it trying to develop more efficient algorithms, because that can be hugely important,” he adds.

This study is particularly interesting given today’s AI infrastructure boom (or should we say “bubble”?), which shows little sign of slowing down.

OpenAI and other American technology companies have signed deals worth hundreds of billions of dollars to build artificial intelligence infrastructure in the United States. “The world needs more computing,” OpenAI president Greg Brockman declared this week when announcing a partnership between OpenAI and Broadcom to build custom AI chips.

A growing number of experts question the soundness of these deals. Nearly 60 percent of the cost of building a data center goes to GPUs, which tend to lose value quickly. Partnerships between major players also appear circular and opaque.
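A back-of-the-envelope calculation suggests why that depreciation worries analysts. The capex total and four-year hardware life below are purely hypothetical assumptions; only the 60 percent GPU share comes from the paragraph above.

```python
# Back-of-the-envelope GPU depreciation. The capex total and hardware
# lifetime are illustrative assumptions; only the 60 percent GPU share
# comes from the article.
capex_usd = 50e9            # assumed total build cost of a large AI data center
gpu_share = 0.60            # portion of capex spent on GPUs (figure cited above)
useful_life_years = 4       # assumed accounting life of accelerator hardware

gpu_capex = capex_usd * gpu_share
annual_writedown = gpu_capex / useful_life_years
print(f"GPU spend: ${gpu_capex / 1e9:.0f}B; straight-line depreciation: "
      f"${annual_writedown / 1e9:.1f}B per year")
```

Under those assumptions, a $50 billion facility would shed about $7.5 billion in GPU value every year before earning anything back.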
