Designing digital resilience in the agentic AI era
While global investment in AI is expected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organizations’ ability to maintain service continuity, security, and cost control during unexpected events. This confidence gap, combined with the profound complexity introduced as autonomous AI systems make decisions and interact with critical infrastructure, calls for a reimagining of digital resilience.
In response, organizations are turning to the concept of a data fabric: an integrated architecture that connects and governs information across all layers of the business. By breaking down silos and enabling real-time access to enterprise-wide data, a data fabric can help human teams and agentic AI systems sense risks, prevent problems before they occur, recover quickly when they do, and keep operations running.
Machine data: the cornerstone of effective artificial intelligence and digital resilience
Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agentic AI requires deep insight into an organization’s machine data: the logs, metrics, and other telemetry generated by devices, servers, systems, and applications.
For agentic AI to enhance digital resilience, it must have seamless, real-time access to this stream of data. Without comprehensive machine data integration, organizations risk limiting the AI’s capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, emphasizes, agentic AI systems rely on machine data to understand context, simulate outcomes, and continually adapt, which makes automated data monitoring a cornerstone of digital resilience.
“We often describe machine data as the heartbeat of the modern enterprise,” Hathi says. “Agentic AI systems are powered by this vital pulse and require access to real-time information. It is essential that these intelligent agents operate directly on the complex flow of machine data, and that the AI itself is trained on that same data stream.”
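To make this concrete, the sketch below shows one way an agentic monitoring loop might consume a live stream of machine data and flag anomalies as they arrive. It is a minimal illustration only: the event structure, service names, and thresholds are assumptions for the example, not part of any vendor’s product.

```python
# Illustrative sketch: an agent-style monitor consuming a stream of machine
# data (latency metrics from a hypothetical service) and flagging anomalies
# in real time. All names here are invented for the example.
from dataclasses import dataclass
from collections import deque
import statistics
import time
import random


@dataclass
class TelemetryEvent:
    source: str        # e.g. "checkout-service" (hypothetical)
    metric: str        # e.g. "latency_ms"
    value: float
    timestamp: float


def stream_events(n: int = 200):
    """Simulate a machine-data stream; in production this would be a
    message bus or telemetry pipeline, not random numbers."""
    for _ in range(n):
        value = random.gauss(120, 15)
        if random.random() < 0.03:          # occasional latency spike
            value *= 4
        yield TelemetryEvent("checkout-service", "latency_ms", value, time.time())


def monitor(window_size: int = 50, threshold: float = 3.0):
    """Flag events that deviate sharply from a rolling window: the kind of
    real-time signal an agentic system would reason over and act on."""
    window: deque[float] = deque(maxlen=window_size)
    for event in stream_events():
        if len(window) >= 10:
            mean = statistics.fmean(window)
            stdev = statistics.stdev(window) or 1.0
            zscore = (event.value - mean) / stdev
            if abs(zscore) > threshold:
                print(f"anomaly: {event.source} {event.metric}="
                      f"{event.value:.0f} (z={zscore:.1f})")
        window.append(event.value)


if __name__ == "__main__":
    monitor()
```

The point of the sketch is the shape of the loop: the agent sits directly on the raw telemetry rather than on periodic human-curated reports, which is what gives it the context and immediacy Hathi describes.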
Few organizations currently achieve the level of machine data integration required to fully enable agentic systems. Not only does this narrow the range of potential use cases for agentic AI; worse, it can also lead to anomalous data and errors in outputs or actions. Natural language processing (NLP) models designed before the advent of generative pre-trained transformers (GPTs) were plagued by linguistic ambiguity, bias, and inconsistency. Similar failures can occur with agentic AI if organizations rush ahead without giving their models a basic fluency in machine data.
For many companies, keeping up with the dizzying pace of AI advancement has been a major challenge. “In some ways, the speed of this innovation is starting to hurt us, because it creates risks that we are not prepared for,” Hathi says. “The problem is that as agentic AI evolves, relying on traditional LLMs trained on human text, voice, video, or printed data doesn’t work when you need your system to be secure, resilient, and always available.”
Data fabric design for resilience
To address these shortcomings and build digital resilience, technology leaders must focus on what Hathi describes as a data fabric design suited to the demands of agentic AI. This means unifying fragmented assets across security, IT, business processes, and networking to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management.
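As a rough illustration of the idea, the sketch below registers a few stand-in domain sources behind a single query interface so that events from different silos can be correlated in one place. The class names, connectors, and records are hypothetical; they are not drawn from any particular data fabric product.

```python
# Minimal sketch of a data-fabric-style access layer: disparate domain
# sources (security, IT operations, business systems) registered behind
# one query interface, so humans and agents can correlate events without
# per-silo plumbing. Everything here is illustrative.
from typing import Protocol, Iterable


class DataSource(Protocol):
    name: str
    def query(self, keyword: str) -> Iterable[dict]: ...


class InMemorySource:
    """Stand-in for a real connector (SIEM, metrics store, CRM, ...)."""
    def __init__(self, name: str, records: list[dict]):
        self.name = name
        self.records = records

    def query(self, keyword: str) -> Iterable[dict]:
        return (r for r in self.records
                if keyword.lower() in str(r).lower())


class DataFabric:
    """Registers sources and fans a single query out across all of them,
    tagging each hit with its origin for cross-silo correlation."""
    def __init__(self):
        self.sources: list[DataSource] = []

    def register(self, source: DataSource) -> None:
        self.sources.append(source)

    def search(self, keyword: str) -> list[dict]:
        return [{"source": s.name, **record}
                for s in self.sources
                for record in s.query(keyword)]


fabric = DataFabric()
fabric.register(InMemorySource("security", [{"event": "failed_login", "host": "db-01"}]))
fabric.register(InMemorySource("it_ops", [{"alert": "disk_full", "host": "db-01"}]))
print(fabric.search("db-01"))   # correlates hits from both silos in one call
```

The design choice worth noting is the common interface: once every silo speaks the same query contract, adding a new domain is a registration step rather than a new integration project, which is what makes real-time, enterprise-wide analysis tractable.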