IBM sees enterprise customers are using ‘everything’ when it comes to AI, the challenge is matching the LLM to the right use case

Over the past hundred years, IBM has seen many different technology trends rise and fall. What tends to win out are the technologies where there is choice.
At VB Transform 2025 today, Armand Ruiz, VP of AI Platform at IBM, detailed how Big Blue thinks about AI and how its enterprise customers are actually deploying the technology. A key theme Ruiz emphasized is that, at this point, it isn't about choosing a single large language model (LLM) or technology. Increasingly, enterprise customers are systematically rejecting single-vendor AI strategies in favor of multi-model approaches that match specific LLMs to targeted use cases.
IBM has its own open-source AI models in the Granite family, but it isn't positioning that technology as the only option, or even the right option, for every workload. This enterprise behavior has pushed IBM to position itself not as a foundation model competitor, but as what Ruiz referred to as a control tower for AI workloads.
“When I sit down in front of a customer, they’re using everything they have access to, everything,” Ruiz explained. “For coding, they love Anthropic, and for some other use cases, such as reasoning, they love o3. Then for customizing LLMs with their own data and fine-tuning, they love either our Granite series, or Mistral with their small models, or even Llama… It’s just matching the LLM to the right use case. Then we help them as well.”
Multi-LLM gateway strategy
IBM’s answer to this market reality is a newly released model gateway that gives enterprises a single API to switch between different LLMs while maintaining observability and governance across all deployments.
The technical architecture lets customers run open-source models on their own inference stack for sensitive use cases, while reaching public APIs such as AWS Bedrock or Google Cloud’s Gemini for less critical applications.
“That gateway provides our customers a single layer, with a single API, to switch from one LLM to another LLM and add observability and governance throughout,” Ruiz said.
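The article doesn’t describe the gateway’s actual interface, but the pattern Ruiz outlines is straightforward to sketch. The minimal Python sketch below uses hypothetical names throughout (it is not IBM’s gateway API): one call surface, per-use-case routing, and a single choke point where logging and governance checks can live.

```python
# Minimal sketch of the gateway pattern described above.
# All names here are hypothetical illustrations, not IBM's gateway API.
from dataclasses import dataclass
from typing import Callable, Dict


def local_granite_call(prompt: str) -> str:
    # Stand-in for a self-hosted inference stack (sensitive workloads stay in-house).
    return f"[local granite] {prompt[:40]}..."


def public_api_call(prompt: str) -> str:
    # Stand-in for a hosted provider such as AWS Bedrock or Gemini.
    return f"[public API] {prompt[:40]}..."


@dataclass
class Route:
    provider: str                 # e.g. "self-hosted", "bedrock", "gemini"
    model: str                    # model identifier the provider understands
    call: Callable[[str], str]    # provider client wrapped behind one signature


class ModelGateway:
    """Single API surface that routes each use case to its configured model."""

    def __init__(self) -> None:
        self._routes: Dict[str, Route] = {}

    def register(self, use_case: str, route: Route) -> None:
        self._routes[use_case] = route

    def complete(self, use_case: str, prompt: str) -> str:
        route = self._routes[use_case]
        # One choke point for observability and governance across every deployment.
        print(f"[audit] use_case={use_case} provider={route.provider} model={route.model}")
        return route.call(prompt)


gateway = ModelGateway()
gateway.register("contract-review", Route("self-hosted", "granite-small", local_granite_call))
gateway.register("marketing-copy", Route("public", "hosted-model", public_api_call))

print(gateway.complete("contract-review", "Summarize the indemnity clause in this agreement."))
print(gateway.complete("marketing-copy", "Draft a product launch tagline."))
```

With this kind of abstraction, swapping the model behind a use case becomes a configuration change rather than a code rewrite, which is the switching flexibility Ruiz describes.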
The approach contrasts directly with the common vendor strategy of locking customers into proprietary ecosystems. IBM isn’t alone in taking a multi-vendor approach to model selection. Multiple model-routing tools have emerged in recent months that aim to direct workloads to the appropriate model.
Agent communication protocols emerge as critical infrastructure
Beyond multi-model management, IBM is addressing the emerging challenge of agent-to-agent communication through open protocols.
The company developed ACP (Agent Communication Protocol) and contributed it to the Linux Foundation. ACP is a competing effort to Google’s Agent2Agent (A2A) protocol, which Google contributed to the Linux Foundation this week.
Ruiz noted that both protocols aim to facilitate communication between agents and reduce custom development work. He expects the different approaches to converge eventually; for now, the differences between A2A and ACP are largely technical.
These communication protocols provide standardized ways for AI systems to interact across different platforms and vendors.
The technical significance becomes clear at enterprise scale: some IBM customers already have more than 100 agents in pilot programs. Without standardized communication protocols, every agent-to-agent interaction requires custom development, creating an unsustainable integration burden.
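To make the integration-burden point concrete, here is an illustrative Python sketch of a shared message envelope between agents. The fields and intents are hypothetical and do not reflect the actual ACP or A2A specifications; the point is that with a common envelope, each agent implements one handler instead of a bespoke integration per peer, so connecting many agents no longer requires a custom bridge for every pair.

```python
# Illustrative agent-to-agent message envelope. The fields and intents are
# hypothetical; they are NOT the actual ACP or A2A wire formats.
import json
import uuid
from dataclasses import asdict, dataclass, field


@dataclass
class AgentMessage:
    sender: str
    recipient: str
    intent: str        # e.g. "task.request" or "task.result"
    payload: dict
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))


def handle(message: AgentMessage) -> AgentMessage:
    # One generic handler per agent: dispatch on the message intent,
    # not on which specific peer sent it.
    if message.intent == "task.request":
        result = {"status": "done", "summary": f"processed {message.payload}"}
        return AgentMessage(
            sender=message.recipient,
            recipient=message.sender,
            intent="task.result",
            payload=result,
        )
    raise ValueError(f"unknown intent: {message.intent}")


request = AgentMessage(
    sender="hr-router",
    recipient="compensation-agent",
    intent="task.request",
    payload={"question": "When is the annual bonus paid?"},
)
print(json.dumps(asdict(handle(request)), indent=2))
```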
AI is about transforming workflows and the way work gets done
As for how Ruiz counsels enterprises approaching AI today, he argues it has to be about more than just chatbots.
“If you’re just doing chatbots, or you’re just trying to do cost savings with AI, you’re not doing AI,” Ruiz said. “I think AI is about completely transforming the workflow and the way work is done.”
The distinction between merely implementing AI and actually doing AI comes down to how deeply the technology is embedded in existing business processes. IBM’s internal HR example shows the transformation: instead of employees asking HR representatives for information, agents now handle routine questions about compensation, hiring and promotions, routing automatically to the appropriate systems and escalating to humans only when necessary.
“I used to spend a lot of time talking to my HR partners to get a lot of things done. Now I handle most of that with an HR agent,” Ruiz explained. “Depending on the question, whether it’s related to compensation, or something to do with handling a separation, hiring someone, or making a promotion offer, all of those things will connect with different internal HR systems, and those will be like separate agents.”
This represents a fundamental architectural shift from human-computer interaction patterns to computer-mediated workflow automation. Instead of employees learning to interact with AI tools, AI learns to execute complete end-to-end business processes.
The technical implication: enterprises need to move beyond API integrations and prompt engineering toward deep process instrumentation that lets AI agents execute multi-step workflow tasks independently.
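As a concrete illustration of the routing-and-escalation pattern Ruiz describes, the sketch below (hypothetical names throughout, not IBM’s internal HR system) shows a router that classifies a request, hands it to a topic-specific agent, and escalates to a human when nothing matches. In practice an LLM would do the classification; a keyword check stands in here to keep the example self-contained.

```python
# Hypothetical sketch of topic-based routing with human escalation.
# None of these names reflect IBM's internal HR agents.
from typing import Callable, Dict


def compensation_agent(request: str) -> str:
    return f"[compensation system] handled: {request}"


def hiring_agent(request: str) -> str:
    return f"[hiring system] handled: {request}"


AGENTS: Dict[str, Callable[[str], str]] = {
    "compensation": compensation_agent,
    "hiring": hiring_agent,
}


def classify(request: str) -> str:
    # In production an LLM would classify the request; keywords stand in here.
    text = request.lower()
    if "salary" in text or "bonus" in text:
        return "compensation"
    if "offer" in text or "hire" in text:
        return "hiring"
    return "unknown"


def route(request: str) -> str:
    agent = AGENTS.get(classify(request))
    if agent is None:
        # Escalate anything the specialized agents can't handle to a human HR partner.
        return f"[escalated to human HR partner] {request}"
    return agent(request)


print(route("When is my bonus paid out?"))
print(route("I want to make a promotion offer to someone on my team."))
print(route("How do I file a grievance?"))
```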
Strategic implications for enterprise AI investment
IBM’s real-world deployment experience points to several important shifts for enterprise AI strategy:
Move beyond chatbot-first thinking: Enterprises should identify complete workflows to transform, rather than adding conversational interfaces to existing systems. The goal is to eliminate human steps, not to improve human-computer interaction.
Architect for multi-model flexibility: Rather than committing to a single AI provider, enterprises need integration platforms that enable switching between models based on use-case requirements while maintaining governance standards.
Invest in communication standards: Organizations should prioritize AI tools that support emerging protocols such as MCP, ACP and A2A, rather than proprietary integration approaches that create vendor lock-in.
“There is a lot to build, and I keep saying that everybody needs to learn AI, especially business leaders, who need to be AI leaders and understand the concepts,” Ruiz said.