Why the next phase of AI adoption will be determined less by models and more by data foundations
After two years of rapid experimentation and deployments, enterprises are now faced with the complex realities of scaling Artificial Intelligence (AI) systems into production. Business leaders are no longer asking whether AI can work; they are asking whether it delivers measurable value, operates reliably at scale, and can be trusted as part of core business processes.
Until now, progress in AI has largely been framed through the lens of model innovation. But as AI systems move closer to production, it is becoming clear that models are only one part of the system. Every layer of the enterprise data stack is now being stress-tested by AI workloads, exposing limitations that were manageable in a human-centric world but become critical failures at machine scale.
The infrastructure that powered the last decade of digital business was never designed for relentless, context-hungry AI systems operating continuously and autonomously. As enterprises confront this reality, the organisations that succeed in the next phase of AI adoption will not be those with the most advanced models, but those that rebuild their data foundations for continuous, governed, real-time operation.
Agentic AI rewards real-time, event-driven foundations
Unlike traditional AI applications that respond to discrete prompts, agents operate continuously. They observe events, query systems, make decisions, and take action, often without human intervention.
This makes them highly sensitive to the quality and responsiveness of the data and systems they depend on. When the data feeding large language models (LLMs) is stale or inconsistent, agent decisions degrade quickly.
Agents are also fundamentally greedy. They do not wait for business hours, batch their requests, or slow down. They operate at scales impossible for humans. An agent optimising a supply chain, detecting fraud, or monitoring inventory can generate more database queries in an hour than a team of analysts in a week.
Retail provides a useful illustration. As AI shopping agents begin to browse, compare, and recommend products on behalf of consumers, they place intense pressure on product, pricing, and inventory systems. And if inventory visibility is inaccurate or systems cannot respond in real time, they simply move on. This same dynamic is playing out across finance, logistics, energy, and manufacturing.
This is especially challenging because many enterprise databases are already struggling under existing workloads. Adding agentic AI on top of batch-based, legacy architectures is a recipe for cascading failures. Latency increases, systems time out, and operational risk rises precisely when businesses are relying on AI for speed and precision.
As a result, organisations are increasingly adopting patterns such as Change Data Capture (CDC) pipelines that stream near-real-time data into modern, massively scalable data platforms. Rather than allowing AI systems to drain core operational databases, data is kept in motion and made available in architectures designed to handle continuous demand. In 2026, this will shift from best practice to a requirement.
As AI systems act autonomously, trust and governance become the critical path
As AI scales, data governance moves from a background concern to a necessity. Human decision-makers can often compensate for missing context, institutional knowledge, or data quality gaps. AI systems cannot. Without the right context, AI applications fail; yet security, privacy, and compliance controls must still be enforced on every piece of context they are given.
When an AI agent makes a decision, whether approving a transaction, adjusting a price, or rerouting a shipment, organisations need programmatic access to its complete data lineage. Beyond the lineage of the supporting context itself, the end-to-end data flow in agentic systems must be auditable and replayable, for rapid iterative improvement, troubleshooting, and compliance.
In practice, this is one of the hardest problems enterprises face. It's not unusual to see a complicated multi-system data flow: data might originate on a mainframe, move through message queues, be enriched via APIs, be combined with records from relational databases, and then be consumed by multiple downstream systems. While individual platforms may offer strong governance within their own boundaries, the gaps between platforms are where AI systems are most likely to break down.
Increasingly, organisations will be forced to double down on their data governance infrastructure. Cross-system data lineage is a particular pain point, yet it is critical as AI becomes operational. The handoffs between systems and the data flowing into and out of AI agents themselves require the same level of visibility and control as individual platforms. Without this, trust in autonomous systems quickly erodes.
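One pragmatic building block is a lineage record attached to every agent decision: an append-only trail of which systems a piece of context passed through on its way into the agent. The sketch below is a hypothetical illustration of the idea, not a reference to any particular governance product; the system names and decision payload are assumptions.

```python
# Illustrative sketch: recording cross-system lineage for an agent
# decision so the data flow can be audited and replayed later.
# Hop names and the decision label are hypothetical examples.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class LineageHop:
    system: str     # e.g. "mainframe", "queue", "enrichment-api"
    operation: str  # what happened to the data at this hop
    at: datetime    # when the hop occurred (UTC)


@dataclass
class AgentDecision:
    decision: str
    lineage: list = field(default_factory=list)

    def record(self, system: str, operation: str) -> None:
        """Append one hop; the trail is never mutated in place."""
        self.lineage.append(
            LineageHop(system, operation, datetime.now(timezone.utc))
        )

    def audit_trail(self) -> list:
        """Ordered, replayable view of every system the data touched."""
        return [(hop.system, hop.operation) for hop in self.lineage]


decision = AgentDecision("approve-transaction")
decision.record("mainframe", "read account balance")
decision.record("queue", "consumed payment event")
decision.record("enrichment-api", "added fraud score")
```

Because hops are immutable and ordered, the same trail serves troubleshooting (replay the flow), improvement (compare trails across decisions), and compliance (prove what data informed an action).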
The future of AI flexibility depends on where data lives
There is a growing misconception that AI flexibility is primarily about model choice. In reality, the bigger risk is ecosystem lock-in. The relative ease of switching between large language models in the past created a false sense of security, but today's model providers are rapidly building proprietary ecosystems that bundle agent frameworks, development tools, and tightly coupled data integrations. Connecting enterprise data and building agents entirely within these environments can leave organisations trapped, facing astronomical migration costs if they later choose to move elsewhere.
Because of this, many organisations will realise it's imperative to have strategic vendor independence built into their AI architecture from the start.
The emerging solution is an independent data plane. By decoupling data from AI tooling, organisations can keep their data portable while retaining the freedom to adopt best-of-breed technologies, agent frameworks, and execution environments as the market evolves.
Open standards also play a key role in achieving flexibility. Protocols such as the Model Context Protocol (MCP) are gaining traction precisely because they reduce friction between AI applications and enterprise data, regardless of the underlying model provider. As MCP adoption accelerates, platforms that fail to support open, interoperable access to data will increasingly be left behind.
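The interoperability argument becomes concrete when a tool's contract is plain, open JSON rather than proprietary glue. The descriptor below is schematic, in the spirit of MCP's tool definitions but not the normative wire format; the tool name and schema are hypothetical.

```python
# Schematic tool descriptor in the spirit of MCP tool definitions.
# Illustrative shape only, not the normative MCP protocol; the tool
# name and fields here are hypothetical examples.
import json

inventory_tool = {
    "name": "check_inventory",
    "description": "Look up the current stock level for a product SKU.",
    "inputSchema": {
        "type": "object",
        "properties": {"sku": {"type": "string"}},
        "required": ["sku"],
    },
}

# Because the contract is declarative JSON with a standard schema
# vocabulary, any compliant client can discover and call the tool
# without provider-specific integration code.
wire_form = json.dumps(inventory_tool, indent=2)
```

This is the flexibility payoff in miniature: the data contract outlives any single model provider's tooling.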
Model innovation will continue at a rapid pace, but it will no longer be the primary determinant of AI success. The next generation of AI leaders will be defined by their ability to keep data in motion, in context, and under control.
The shift from AI pilots to production will expose a simple reality: AI ROI is earned in the data layer. Organisations that strengthen their data foundations with the same urgency they once applied to model selection will move faster, scale safely, and realise meaningful business impact.