AI

Finding return on AI investments across industries

It is now officially three years since ChatGPT entered the market, and many pundits have turned to terms like “bubble” to suggest why generative AI is not generating material returns outside of a few technology vendors.

In September, the MIT NANDA report made waves because the sound bite every author and influencer picked up on was that 95% of all AI pilots fail to scale or deliver clear, measurable ROI. McKinsey had previously published similar findings, pointing to agentic AI as the way forward to deliver major operational benefits to companies. And at The Wall Street Journal’s Technology Council Summit, AI technology leaders recommended that IT managers stop worrying about AI ROI because measuring the gains is difficult, and any measurements they attempt will be wrong.

This puts technology leaders in a risky position: strong technology stacks already support their business operations, so what is the upside of introducing new technology?

For decades, deployment strategies have followed a steady rhythm in which technology leaders avoid destabilizing business-critical workflows just to swap out individual components in their technology stacks. Better or cheaper technology is meaningless, for example, if it jeopardizes disaster recovery.

The price may rise when a new buyer takes over mature middleware, but losing a portion of your organization’s data because you are midway through a migration to a new technology is far more serious than paying a higher price for stable technology you have run your business on for 20 years.

So, how do companies get a return on investment in the latest technology transformations?

The first principle of AI: Your data is your value

Most articles about AI and data focus on the engineering work required to ensure that the AI model makes inferences against business data in warehouses that reflect past and current business realities.

However, one of the most common enterprise AI use cases starts with rolling out an AI model and uploading file attachments to it. This step narrows the AI model’s scope to the content of the uploaded files, speeding up accurate responses and reducing the number of prompts required to get the best answer.
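
As a rough illustration of that pattern, the sketch below grounds a model’s answers in the text of a single uploaded document. This is a minimal sketch, assuming the openai Python SDK; the file name, model name, and question are hypothetical placeholders, and your vendor’s SDK may differ.

```python
# Minimal sketch: constrain a model's answers to one uploaded document by
# placing the file's text in the prompt context. File path, model name, and
# question are illustrative placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical business document used to ground the model's answers.
doc_text = Path("expense_policy.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; pin whatever your stack supports
    messages=[
        {"role": "system",
         "content": "Answer only from the attached policy. "
                    "If the policy does not cover the question, say so."},
        {"role": "user",
         "content": f"Policy document:\n{doc_text}\n\n"
                    "Question: What is the per-night hotel limit for domestic travel?"},
    ],
)
print(response.choices[0].message.content)
```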

This tactic relies on sending your own business data to the AI model, so there are two important considerations to address in parallel with data preparation: first, control your systems for proper confidentiality; second, develop a thoughtful negotiation strategy with model vendors, who cannot develop their frontier models further without access to non-public data, such as your business data.
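
The confidentiality consideration usually means controls that run before any data leaves your environment. The sketch below is an illustration only, under the assumption of a Python stack: the patterns and file name are hypothetical, and real controls belong in a reviewed data-governance layer, not a handful of regexes.

```python
# Minimal sketch: scrub obviously sensitive values from a document before it
# leaves your trust boundary for a vendor-hosted model. Patterns are
# illustrative, not an exhaustive or production-grade control.
import re
from pathlib import Path

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # US SSN-like values
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # email addresses
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[REDACTED-CARD]"),       # card-like numbers
]

def scrub(text: str) -> str:
    """Return a copy of text with sensitive patterns masked."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

# Hypothetical document, scrubbed before it is attached to any prompt.
safe_text = scrub(Path("expense_policy.txt").read_text(encoding="utf-8"))
```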

Recently, Anthropic and OpenAI have closed huge deals with enterprise data platforms and their owners because there is no longer enough high-value raw data publicly available on the internet.

Most companies will instinctively prioritize the confidentiality of their data and design business workflows to preserve trade secrets. From an economic standpoint, however, especially given the true cost of each model API call, exchanging selective access to your data for services or pricing concessions may be the right strategy. Instead of treating model purchasing and onboarding as a typical supplier procurement exercise, consider whether your data can help your vendor develop its model while your business adopts that model, to mutual benefit.

The second principle of AI: Boring by design

According to Information is Beautiful, 182 new AI models were introduced to the market in 2024 alone. When GPT-5 entered the market in 2025, many models released 12 to 24 months earlier became unavailable until subscription customers threatened to cancel. Previously stable AI workflows found themselves built on models that no longer worked. Their technology providers assumed customers would be excited about the latest models and did not realize how much importance business workflows place on stability. Video game players happily upgrade their custom builds over the life of the components in their systems, and will replace an entire console just to play a newly released title.

However, that behavior does not translate to the operating pace of a business. While many employees may use the latest models to process documents or create content, back-office operations cannot afford to switch their technology stack three times a week to keep up with the latest models. Back-office work is boring by design.

The most successful AI deployments have focused on applying AI to business problems unique to their companies, often running in the background to speed up or augment mundane but mandatory tasks. Relieving statutory or expense auditors of manually checking individual reports, while keeping the final decision in human hands, combines the best of both.

The point is that none of these tasks requires constant updates to the latest model to deliver this value. This is also an area where abstracting your business workflows from direct model APIs can provide additional long-term stability while preserving the option to update or upgrade the underlying engines at the pace of your business.
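
One way to achieve that abstraction, sketched below under the assumption of a Python stack, is to put a thin interface between workflows and any specific vendor SDK. The class names, model identifiers, and workflow function here are hypothetical illustrations, not a prescribed architecture.

```python
# Minimal sketch: keep business workflows behind a thin interface instead of
# calling a vendor's model API directly, so the underlying engine can be
# swapped or pinned without touching workflow code.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIModel:
    """Adapter over one vendor SDK; model name is a pinned config choice."""
    def __init__(self, model: str = "gpt-4o-mini") -> None:
        from openai import OpenAI
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

class LocalStubModel:
    """Deterministic stand-in for tests or an on-prem engine."""
    def complete(self, prompt: str) -> str:
        return f"[stub response to: {prompt[:40]}...]"

def summarize_expense_report(model: TextModel, report_text: str) -> str:
    # The workflow only knows the interface, never the vendor SDK.
    return model.complete(f"Summarize policy exceptions in this report:\n{report_text}")

# Swapping engines is a configuration change, not a workflow rewrite:
# summary = summarize_expense_report(OpenAIModel("gpt-4o-mini"), text)
# summary = summarize_expense_report(LocalStubModel(), text)
```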

The third principle of AI: Pickup truck economics

The best way to avoid upside-down economics is to design systems around your users rather than around vendor specifications and benchmarks.

Many companies continue to fall into the trap of purchasing new hardware or new types of cloud services based on vendor-driven benchmarks rather than starting their AI journey from what their business can consume, and at what pace, based on the capabilities they have already deployed.

While Ferrari’s marketing is effective and those cars are genuinely cool, they can move no faster through a school zone than any other car, and they lack the trunk space for groceries. Keep in mind that every server and remote model a user touches layers on cost and design weight, and that reconfiguring workflows to reduce spending on third-party services is where the savings come from.

Many companies have found that the AI workflows built to support their customers add millions of dollars in operational throughput costs, and they end up spending additional development time and money reworking the implementation for OpEx predictability. Meanwhile, companies that decided the system only needed to run as fast as a human can read, less than 50 tokens per second, were able to successfully deploy large-scale AI applications with minimal overhead.
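
As a rough sketch of that pacing idea, the throttle below caps a streamed response at roughly 50 tokens per second instead of provisioning for maximum throughput. The token source is a stand-in for whatever streaming iterator your model SDK provides; the rate and example text are assumptions for illustration.

```python
# Minimal sketch: pace a streamed model response at roughly human reading
# speed (about 50 tokens per second) rather than maximum throughput.
import time
from typing import Iterable, Iterator

def throttle(tokens: Iterable[str], tokens_per_second: float = 50.0) -> Iterator[str]:
    """Yield tokens no faster than the requested rate."""
    interval = 1.0 / tokens_per_second
    next_time = time.monotonic()
    for token in tokens:
        now = time.monotonic()
        if now < next_time:
            time.sleep(next_time - now)
        next_time = max(now, next_time) + interval
        yield token

if __name__ == "__main__":
    # Stand-in token stream; replace with your SDK's streaming iterator.
    fake_stream = iter("Reviewing the expense report for policy exceptions ...".split())
    for tok in throttle(fake_stream, tokens_per_second=50.0):
        print(tok, end=" ", flush=True)
    print()
```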

There are many aspects of this new automation technology to work through. The best guidance is to start hands-on, design for independence among core technology components so that stable applications are not disrupted over the long term, and leverage the fact that AI technology makes your business data valuable for advancing your technology vendors’ goals.

This content was produced by Intel Corporation. It was not written by the editorial staff of MIT Technology Review.
