The Hidden Cost of Letting AI Agents Query Live Systems

Every DBA has seen it at least once. You are looking at the production database and something feels wrong. CPU is climbing. Logical reads are surging. One or two sessions are consuming far more resources than everything else combined.

You drill down into the SQL.

The query is unfamiliar. It joins more tables than you would expect, sometimes even joining the same table multiple times. The execution plan is complex, the logical reads enormous.

The first question is always the same: where did this query come from?

Often the answer is simple. A new application feature has been released. Somewhere inside it is a query that looked perfectly reasonable in testing. Run once or twice, it behaved well enough.

Production tells a different story.

Now the query is executed repeatedly. Sometimes concurrently. Sometimes triggered by user behaviour the test harness never simulated. Occasionally the query runs long enough for the application to time out, so the user refreshes the page, launching another execution while the first one is still running.

What looked harmless in isolation becomes dangerous when multiplied across real workloads.

But a DBA has one advantage: you can usually find the human behind the query.

Someone wanted to know something. Once you understand that intent, the workload can usually be controlled. Run it less often. Move it elsewhere. Rewrite it entirely.

Operational databases survive because their workloads are ultimately bounded by human intent. AI agents change this equation in a subtle but important way.


When the Database Becomes Part of the Reasoning Process

Traditional enterprise applications interact with databases in predictable ways.

An application developer writes the query. It is tested, reviewed, and eventually deployed. Over time the database develops a recognisable workload pattern. DBAs learn those patterns, optimise them, and build operational safeguards around them.

Even complex systems ultimately follow this model. A user action triggers application logic, which triggers a known set of database queries.

AI agents behave differently.

Instead of executing a predefined query pattern, an agent may generate queries dynamically as part of a reasoning process. It retrieves data, evaluates the result, refines the question… and queries again.

In other words, the database is no longer simply answering a question. It becomes part of the mechanism the system uses to figure out the answer.

This subtle shift has an important consequence.

Applications constrain database workloads.

Agents amplify them.


The Amplification Effect

A traditional application might translate a single user action into one or two database queries.

An agent may translate the same request into many.

The agent retrieves information, evaluates it, and then decides what to ask next. It may retry a query with different parameters, join additional tables, or explore alternative paths through the data.

Some systems do this only once or twice. More autonomous agents may repeat the process multiple times in a reasoning loop.

From the database’s perspective, a single human question can now generate many queries. In simple retrieval pipelines this may only be a handful. In more autonomous reasoning systems it can grow to dozens — sometimes even hundreds — as the agent explores the data.
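The multiplication is easy to see in a toy simulation. The numbers below are illustrative assumptions rather than measurements: each reasoning step fans out into one to four queries (retries, extra joins, alternative paths), and the agent stops once it is satisfied with the answer.

```python
import random

def answer_question(seed: int, max_steps: int = 5) -> int:
    """Toy model of an agent reasoning loop: each step issues one or
    more queries, inspects the result, and decides whether to query
    again. Returns the number of queries issued. Illustrative only;
    fan-out and stopping probabilities are made-up assumptions."""
    rng = random.Random(seed)
    queries = 0
    for _ in range(max_steps):
        queries += rng.randint(1, 4)  # retries, extra joins, alt paths
        if rng.random() < 0.3:        # agent is satisfied: stop looping
            break
    return queries

# A traditional app maps one question to roughly one query.
# An agent maps one question to a variable, amplified number.
total = sum(answer_question(i) for i in range(1000))
print(f"1000 questions -> {total} queries")
```

Even with these modest assumptions, the total comfortably exceeds one query per question, and a more autonomous loop only pushes the multiplier higher.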

Multiply that pattern across thousands of employees interacting with AI systems, and the database workload begins to look very different from the one the system was originally designed to support.

This is not necessarily a flaw. It is simply how exploratory systems behave. But it does change the shape of the workload interacting with production systems.


Why Traditional Controls Are Not Enough

Experienced DBAs will recognise that production databases already include many mechanisms designed to control runaway workloads.

Connection limits, query governors, resource groups, and timeouts exist precisely because poorly behaved queries can destabilise shared systems.
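As a concrete sketch of one such control, the snippet below uses SQLite's progress handler as a stand-in for a server-side query governor, aborting any statement that exceeds a time budget. Real engines expose equivalents such as `statement_timeout` in PostgreSQL or resource groups in SQL Server; the budget here is an arbitrary illustration.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(x);
    WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM n WHERE i < 200000)
    INSERT INTO t SELECT i FROM n;
""")

# A crude per-statement governor: returning a truthy value from the
# progress handler interrupts the currently executing statement.
BUDGET_SECONDS = 0.05
deadline = time.monotonic() + BUDGET_SECONDS
conn.set_progress_handler(lambda: time.monotonic() > deadline, 10_000)

aborted = False
try:
    # Deliberately expensive: a cartesian self-join of 200k x 200k rows.
    conn.execute("SELECT count(*) FROM t a, t b").fetchone()
except sqlite3.DatabaseError:  # raised when the governor interrupts
    aborted = True
print("aborted by governor" if aborted else "completed within budget")
```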

These controls work well when applications generate predictable query patterns.

What they are less suited to is a workload whose shape is not known in advance.

Agentic systems can generate new queries dynamically, retry failed steps, or explore alternative reasoning paths. The individual queries may not be problematic on their own. The challenge is the emergent behaviour that appears when many such queries interact with a production workload.

The result is a new class of operational uncertainty. In practice this can manifest as unexpected infrastructure consumption, runaway query costs in cloud databases, or production incidents triggered by exploratory workloads interacting badly with operational traffic.

When a traditional query misbehaves, a DBA can usually trace it back to a specific application feature or report. With agentic systems, the activity observed in the database may simply be a side effect of an evolving reasoning process.

Understanding why a query is running becomes much harder.


Enterprise Architectures Will Evolve

For this reason, the architecture implied by many early AI demonstrations is unlikely to survive unchanged in large enterprise environments.

Those demonstrations often show AI agents querying operational databases directly. For simple scenarios, that approach works well enough.

At scale, however, exploratory workloads interacting directly with systems of record create operational and economic pressures that are difficult to manage.

The likely response is architectural evolution.

Instead of allowing agents to query operational systems directly, enterprises will increasingly introduce intermediate layers designed to absorb exploratory behaviour. These layers might include analytical replicas, semantic data services, curated retrieval surfaces, vector indexes, materialised views, or other forms of controlled query infrastructure.
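A minimal sketch of that separation, using two in-memory SQLite databases as hypothetical stand-ins for a system of record and its analytical replica. The router below is illustrative, not a real product: exploratory agent reads are absorbed one step removed, while writes and authoritative reads stay on the primary.

```python
import sqlite3

# Two connections standing in for a primary and an analytical replica.
primary = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (primary, replica):
    db.execute("CREATE TABLE orders(id INTEGER, amount REAL)")

class QueryRouter:
    """Sketch of the architectural separation described above: writes
    and authoritative reads go to the system of record, exploratory
    agent traffic goes to a copy one step removed."""
    def __init__(self, primary, replica):
        self.primary, self.replica = primary, replica

    def execute(self, sql, params=(), *, exploratory=False):
        is_read = sql.lstrip().upper().startswith("SELECT")
        target = self.replica if (exploratory and is_read) else self.primary
        return target.execute(sql, params)

router = QueryRouter(primary, replica)
router.execute("INSERT INTO orders VALUES (1, 99.50)")  # system of record
rows = router.execute("SELECT count(*) FROM orders", exploratory=True).fetchone()
print("replica sees", rows[0], "orders (replication not yet applied)")
```

Note the trade-off the sketch makes visible: the replica has not yet received the insert, so the exploratory read sees stale state. That lag is exactly the price of keeping agent workloads away from the primary.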

What matters is not the specific technology. It is the architectural separation.

Systems of record remain the authoritative source of enterprise data. But the exploratory behaviour of AI agents will increasingly occur one step removed from those systems.

Until architectures evolve to contain that behaviour, many enterprise AI initiatives will struggle to deliver the level of return on investment that early demonstrations appear to promise.

Not because the models fail, but because the surrounding architecture has not yet caught up with how those models actually behave.

This article is part of the Databases in the Age of AI series.

The Cloud Is Built on Uniformity. Databases Are Not

When engineers build infrastructure, they aim for stability. Systems must behave predictably under load. They must survive success as well as failure. Growth is assumed. If the system is successful, it will be stressed. The challenge is ensuring that growth does not destabilise the structure meant to support it.

Uniform Supply, Irregular Demand

The public cloud addresses this challenge through uniformity. Compute, storage and network capacity are delivered as standardised building blocks. Instance sizes are predefined. Performance characteristics are abstracted. Capacity can be replicated across availability zones and regions with remarkable consistency. This is the operational achievement of Amazon Web Services, Microsoft Azure and Google Cloud. Uniform supply enables hyperscale.

It is an extraordinary achievement. It is also deliberately impersonal.

Enterprise systems of record operate under different conditions. They reflect the behaviour of real customers and real businesses. Demand is rarely smooth. It clusters around product launches, reporting cycles, market opens, regulatory deadlines and seasonal peaks. A transactional database sits at the centre of this behaviour. It absorbs whatever the business generates.

It does not get to choose the shape of that demand.

Cloud infrastructure assumes that workloads can be distributed horizontally across uniform units. Systems of record often experience concentrated pressure at specific moments in time. This is not a flaw, although it can feel like one at 9:01 on Monday when performance dashboards are flashing red. It reflects the difference between uniform supply and irregular demand.


Statistical Guarantees, Deterministic Commit

The distinction becomes clearer when examining reliability. Cloud platforms express resilience statistically. Availability is measured in percentages. Durability in strings of nines. Performance in ranges and envelopes. At fleet scale, these models are extraordinarily effective.

Most of the time, that is enough.

Transactional systems optimise for a different property. When a transaction commits, it must commit correctly and in order. When state changes, it must change once. The authoritative copy of truth cannot be approximate. Modern database engines distribute storage and parallelise reads. But the semantics of correctness still converge at the transaction boundary. Correctness is deterministic even when the infrastructure beneath it is statistical.
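The commit boundary is worth making concrete. The toy transfer below (SQLite purely as a stand-in for any transactional engine) fails before committing, and the partial updates never become visible: both sides of the transfer commit, or neither does.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE balances(account TEXT PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO balances VALUES (?, ?)",
               [("alice", 100.0), ("bob", 0.0)])
db.commit()

# The transaction boundary: the sqlite3 connection context manager
# commits on success and rolls back on any exception.
try:
    with db:
        db.execute("UPDATE balances SET amount = amount - 50 WHERE account = 'alice'")
        db.execute("UPDATE balances SET amount = amount + 50 WHERE account = 'bob'")
        raise RuntimeError("simulated failure before commit")
except RuntimeError:
    pass

alice = db.execute("SELECT amount FROM balances WHERE account = 'alice'").fetchone()[0]
print(alice)  # -> 100.0 : both updates were rolled back together
```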

Cloud platforms optimise for aggregate behaviour across fleets. Transactional systems optimise for the one transaction that must not be wrong. Both approaches are rational. They are aimed at different targets — and those targets matter most under stress.


Where the Models Converge

The economic model reinforces the distinction. Elasticity allows capacity to expand and contract in response to demand. Stateless services align naturally with this pattern. Horizontal replication tends to make cost proportional to usage.

That is the promise.

Systems of record often scale differently. Reducing latency, increasing memory and tightening the coupling between compute and data frequently precede horizontal distribution. Cloud providers offer very large instance types and strong performance isolation. It is entirely possible to achieve impressive stability and scale in this way — and not infrequently, equally impressive cloud bills.

However, as performance guarantees increase, elasticity typically narrows. Dedicated capacity replaces shared pools. Provisioned throughput replaces best-effort scheduling. Predictability rises. Flexibility declines. These are trade-offs rather than failures. They always have been.

For many years, architects could separate these optimisation philosophies. Operational systems preserved deterministic state transitions. Analytical platforms absorbed variability downstream.


AI Removes the Insulation

AI reduces that separation. Machine agents increasingly interact directly with live operational data rather than curated extracts. Inference engines that are inherently probabilistic now depend more directly on infrastructure expected to be exact. The probabilistic layer now relies on deterministic state in real time.

This does not invalidate the cloud model, nor does it imply that systems of record do not belong there. It makes the architectural assumptions more visible. Uniform, statistical infrastructure interacts with non-uniform, deterministic state, and the optimisation choices at each layer become more tightly coupled.

Architects must therefore decide explicitly where determinism resides in their stack and what elasticity they are willing to trade to preserve it. Elasticity, isolation and correctness can coexist. But not infinitely, and never without cost.

Someone always pays for certainty.

The cloud is built on uniformity. Databases are built on authority. That authority carries operational consequence.

As AI tightens the feedback loop between them, trade-offs once hidden inside abstraction become explicit design decisions. And explicit design decisions are where responsibility ultimately resides.

This article is part of the Databases in the Age of AI series.

Why We Spent 20 Years Protecting Databases from Analytics (and Why AI Just Broke That Truce)

8:53am on a Monday

I want to take you back roughly twenty years.

It is 8:53am on a Monday morning in central London and I am sitting at my desk staring at the screen. My coffee is untouched. My palms are sweating.

The ETL job is still running.

If that phrase means nothing to you, count yourself lucky: in the mid-2000s it was enough to strike fear into the heart of any production DBA. At the time I was the database lead for a SaaS platform serving customers across the UK, most of whom would start logging in from 9am.

The weekend ETL process (Extract, Transform, Load) pulled data out of the operational database, reshaped it and pushed it into the enterprise data warehouse. It ran every weekend and it was supposed to finish long before Monday morning.

It had not.

CPU pinned. Redo logs churning. User sessions beginning to queue.

In a few minutes customers would start calling support. Support would start calling me. And somewhere upstairs, the CEO would notice that his dashboard was still showing last week’s numbers.

I had two options. Let it run and hope it finished before the system buckled under real user traffic. Or kill it and spend the next hour watching rollback potentially take even longer, while guaranteeing that the warehouse would be stale until the following weekend.

How did we end up building systems where that was a normal Monday morning dilemma?


Why We Built the Wall

For the better part of two decades, the industry answered that question in one consistent way: keep analytics away from operational systems.

Transactional databases were designed to process orders, update accounts and record events predictably. Analytical workloads were different. They were heavy, exploratory and often poorly constrained. They scanned large portions of data, built aggregates, joined everything to everything and consumed CPU and I/O in bursts that were difficult to forecast.

Putting the two together in the same system was a recipe for contention.

So we separated them.

We built ETL pipelines. We built data warehouses. Later, we built data lakes and lakehouses. We introduced replication, change data capture and streaming. Each innovation was, in its own way, an attempt to preserve the integrity of the system of record while still making data available for analysis.

This was not fashion. It was defensive architecture.

The separation protected revenue-generating systems from analytical curiosity. It provided workload isolation. It gave operations teams a fighting chance of keeping Monday morning uneventful.


It Was Never Just About Performance

Enterprise environments rarely have a single system of record. A CRM system holds customer interactions. An ordering platform tracks transactions. Billing lives somewhere else. Supply chain somewhere else again.

The warehouse became not just a safety valve for analytics, but a unifying layer. It was the place where disparate operational systems could be reconciled into something coherent.

For years, this model worked.

Dashboards were allowed to be slightly stale. Reports could reflect yesterday’s state. Humans tolerate delay. In fact, they often prefer it. Analysis takes time, and decisions are rarely made in milliseconds.

The truce held because the consumer was human.


AI Changes the Consumer

AI agents change that.

An AI agent does not log in at 9am. It does not wait for a dashboard refresh. It does not tolerate yesterday’s numbers if it is expected to act on what is true right now.

Inference is not reporting. It is decision execution at machine speed.

If an agent is recommending a next action, approving a transaction, adjusting a price or triggering a workflow, the freshness of the underlying data becomes materially important. Close enough is no longer good enough. Staleness is no longer cosmetic. It alters outcomes.

The architectural assumption that analytics can safely run on a delayed copy of operational data begins to fracture.

This does not mean warehouses were a mistake. It does not mean lakehouses are obsolete. It does not mean streaming pipelines were misguided.

It means they were optimised for a different consumer.

For years, we optimised for human analysis. Now we are increasingly optimising for machine-driven action.

That is a different problem.


The Balance of Trade-Offs

For two decades, the answer was clear: keep them apart.

Protect the system of record. Move the data. Analyse it somewhere else. Accept a little delay in exchange for stability and control.

That architecture was forged in moments exactly like that Monday morning at 8:53am: CPU pinned, redo logs churning, business users about to log in.

AI does not invalidate that history. It simply changes the balance of trade-offs.

The truce between operational databases and analytics was built for a world where humans consumed insight.

We are now entering a world where machines consume state.

And that changes the conversation.

This article is part of the Databases in the Age of AI series.

AI Doesn’t Read Dashboards… and That Changes Everything for Databases

A bank executive opens a fraud dashboard in Microsoft Power BI.

Losses by region, chargeback ratios, transaction velocity trends and a heatmap of anomalous activity. The numbers refresh within minutes. Data flows out of the system of record, is reshaped and aggregated, then presented for interpretation.

This is contemporary analytics: fast and operationally impressive. But it remains interpretive. It explains what is happening, while intervention occurs elsewhere – inside a fraud model embedded in the execution path, deciding in milliseconds whether money moves or an account is frozen.

Reporting systems describe what has already occurred. Even when refreshed every few minutes, they are retrospective. Inference systems are anticipatory. They evaluate the present in order to shape what happens next.

For two decades, enterprise data platforms were built around a deliberate separation between systems of record and analytical platforms. The system of record handled revenue-generating transactions; analytics operated on copies refreshed hourly or even every few minutes. Latency narrowed, but the boundary remained.

AI systems do not consume summaries, however fresh. They make decisions inside the transaction itself. A real-time fraud model does not want a recently refreshed extract; it requires the authoritative state of the business at the moment of decision. When automation replaces interpretation, data freshness becomes a decision integrity requirement. That shift changes the role of the database entirely.


Snapshots ≠ State

The difference is not batch versus real time. It is snapshot versus canonical state.

A snapshot is a materialised representation of state at a prior point in time – even if that point is only moments earlier. It may be refreshed frequently or continuously streamed, but it remains a copy. The system of record contains the canonical state of the enterprise – balances, limits, flags and relationships – reflecting the legal and financial truth when a transaction commits.

In fraud detection, that distinction is decisive. A dashboard can tolerate slight delay because its purpose is explanation. A model embedded in the execution path cannot. It must evaluate current balance, velocity and account status, not a recently materialised representation.
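One way to express that requirement in code is a freshness guard at the decision point. Everything here is a hypothetical sketch: the balance figures, the freshness budget and the `fetch_live` callback standing in for a read against the system of record.

```python
import time
from dataclasses import dataclass

@dataclass
class Snapshot:
    balance: float
    as_of: float  # unix timestamp when the copy was materialised

def authorise(amount: float, snapshot: Snapshot, fetch_live, budget_s: float = 0.5):
    """Freshness guard at the decision point: a snapshot is acceptable
    for explanation, but a decision falls back to the authoritative
    state when the copy is older than the budget."""
    age = time.time() - snapshot.as_of
    balance = snapshot.balance if age <= budget_s else fetch_live()
    return amount <= balance

live_balance = 40.0  # the committed, canonical state
stale = Snapshot(balance=100.0, as_of=time.time() - 3600)  # an hour old
print(authorise(75.0, stale, fetch_live=lambda: live_balance))  # -> False
```

Acting on the stale snapshot would have approved a payment the canonical balance cannot cover; the guard forces the decision back to the source of truth.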

For years, we increased the distance between analytics and the system of record to protect transactional stability. That separation reduced risk in a world where insight followed action.

Automation reverses that order. Insight now precedes action. Once decisions are automated, the gap between a copy of the data and the authoritative source becomes consequential.


When Almost Right Is Still Wrong

If a fraud dashboard is slightly stale, an analyst may adjust a threshold next week. When a fraud model evaluates incomplete or delayed state, the error is executed immediately and repeated at scale.

False declines can lock customers out within minutes. False approvals can leak substantial losses before discrepancies surface in reporting. Automation compresses time and amplifies mistakes because there is no interpretive buffer.

Real-time intervention is inevitable. Competitive pressure and regulatory scrutiny demand it. But once decisions are automated, tolerance for architectural distance shrinks. A delay harmless in reporting can be material in a decision stream. A dataset “close enough” for analytics may be insufficient for automated intervention.

The risk is not that dashboards are wrong; it is that forward-looking systems may act on something almost right.


Databases in the Age of Intervention

When fraud detection becomes automated intervention rather than retrospective analysis, the requirements on the data platform change. Freshness is defined at the moment of decision, not by refresh intervals.

Replication patterns take on new significance. Asynchronous copies and downstream materialisations were designed to protect the system of record. They optimise scale and isolation, but every layer introduces potential lag or divergence. For reporting, that trade-off is acceptable. For automated decisions in revenue-generating workflows, it becomes risk.

Workload separation also looks different. When analytics is retrospective, distance protects performance. When inference is embedded in operational workflows, proximity to live transactional state matters. The challenge is enabling safe, predictable access without compromising correctness.

Fraud detection is simply the clearest example. Dynamic pricing, credit approvals, supply chain routing and clinical triage all follow the same pattern. The model is not generating a report about what happened; it is evaluating the present to influence what happens next.

For decades, enterprise architecture assumed intelligence followed events. As AI systems become anticipatory and automated, intelligence precedes action. The database is no longer simply the foundation of record-keeping.

It becomes part of how the future is decided – whether we are comfortable with that or not.

This article is part of the Databases in the Age of AI series.

Databases Were Built for Humans – AI Agents Change the Equation

For more than a decade, the industry has been preparing for a data explosion.

Zettabytes. Exponential curves. Hockey sticks on slides. Whether it was IDC’s DataSphere forecasts or countless vendor keynotes, the message was consistent: the amount of data created and stored worldwide was about to grow very, very fast.

And to be fair, that part largely went to plan.

Enterprises adapted. Storage scaled out. Cloud elasticity became normal. Analytical workloads were pushed away from systems of record. The industry did the work required to survive – and even thrive – in a world of exploding data volumes.

What almost nobody questioned, however, was a much quieter assumption baked into all of that planning.

The Assumption Nobody Revisited

All of those forecasts, explicit or implicit, assumed that the users of enterprise systems would remain human.

Humans are slow. Humans are bursty. Humans sleep.

Even power users have natural limits, predictable working patterns and an instinct for self-preservation when systems start pushing back. Entire generations of database design, connection management and capacity planning quietly depend on those characteristics.

It wasn’t a bad assumption. It was a reasonable one. Until it wasn’t.


A Step-Change, Not a Trend

What has changed is not just how much data exists, but who – or what – is accessing it.

AI agents introduce a new class of user into enterprise computing: non-human, machine-speed actors operating directly against application logic and data sources. This isn’t a continuation of an existing trend. It’s a step-change.

You’re not adding more users along the same curve. You’re changing the curve itself.

The data explosion was predicted. The user explosion – at least in this form – was not.

Why AI Agents Break Old Rules

AI agents don’t just behave like very enthusiastic humans.

They are fundamentally different:

  • Speed: they operate at machine speed, turning milliseconds into meaningful units of work
  • Relentlessness: they don’t pause, sleep or slow down unless explicitly forced to
  • Unpredictability: agentic workflows fan out, retry, amplify and cascade in ways humans never could

These aren’t “power users”. They’re closer to autonomous load generators.
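If agents lack natural limits, those limits have to be imposed. A token bucket is the classic mechanism for doing so; the sketch below is illustrative, with made-up rates rather than recommendations.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: one way to give machine-speed
    actors the natural limits human users have by default. Each agent
    request must acquire a token; tokens refill at a fixed rate."""
    def __init__(self, rate_per_s: float, burst: int):
        self.rate, self.capacity = rate_per_s, burst
        self.tokens, self.updated = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_s=10, burst=5)
# An agent firing as fast as it can: only the burst gets through at once.
results = [bucket.allow() for _ in range(20)]
print(f"{sum(results)} of 20 back-to-back requests allowed")
```

A human never needed this. An autonomous load generator does.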

When Agents Hit Systems of Record

Critically, AI agents don’t want last night’s report.

They want now.

That pulls them towards operational systems of record – the RDBMS platforms that were carefully protected for the last twenty years from exactly this kind of access pattern. Read replicas help, until they don’t. Caches help, until coherence matters. Copy lag becomes a business problem, not a technical detail.

The long-standing truce between OLTP and everything else is under strain.


Capacity Planning Enters the Chaos Zone

Traditional infrastructure planning assumes that tomorrow looks broadly like yesterday, just a bit bigger.

AI agents break that assumption.

Sudden workload spikes. Non-linear fan-out. Cost curves that move faster than budgeting cycles. Organisations are forced into an uncomfortable choice: over-provision aggressively and accept unpredictable cloud bills, or under-provision and risk outages in systems that now sit directly on critical decision paths.

Capacity planning stops being optimisation. It becomes risk management.

This Is Already Happening

None of this is theoretical.

Organisations are already talking openly about AI agents as part of their workforce – not as tools, but as actors performing work at scale.

Enterprises are comfortable counting tens of thousands of AI agents as “workers”, but it shouldn’t be surprising when those workers behave very differently to humans – and place very different demands on the systems beneath them.


The Equation Has Changed

The data explosion followed the forecast.

The explosion in users did not.

Databases were built for humans – slow, bursty, predictable ones – and that assumption shaped everything from architecture to cost models. AI agents don’t fit that mould… and pretending they do is how organisations drift into outages, runaway costs or both.

Databases were built for humans. AI agents didn’t get the memo – and they’re already in production.

This article is part of the Databases in the Age of AI series.

Inferencing Is a Database Problem Disguised as an AI Problem

I have a habit of becoming interested in technology trends only once they collide with reality. Flash memory wasn’t interesting to me because it was new – it was interesting because it broke long-held assumptions about how databases behaved under load.

Cloud computing wasn’t interesting to me because infrastructure became someone else’s problem. It became interesting when database owners started making uncomfortable compromises just to get revenue-affecting systems to run acceptably in the cloud. Compute was routinely overprovisioned to compensate for storage performance, leading to large bills for resources that were mostly idle. At the same time, “modernisation” began to feel less like an architectural necessity and more like a convenient justification for expensive consultancy services.

And now, just when I thought flashdba had nothing left to say, AI is following the same path.


We’ve Seen This Movie Before

For the last couple of years, most of the attention has been on training. Bigger models, more parameters, more GPUs, massive share prices. That focus made sense because training is visible, centralised and easy to reason about in isolation. But as inferencing starts to move up into the enterprise, something changes.

In the enterprise, inferencing stops being an interesting AI capability and starts becoming part of real business workflows. It gets embedded into customer interactions, operational decisions and automated processes that run continuously, not just when someone pastes a prompt into a chat window. At that point, the constraints change dramatically.

Enterprise inferencing is no longer about what a model knows. It is about what the business knows right now. And that is where things begin to feel very familiar to anyone responsible for systems of record.

Because once inferencing depends on real-time access to authoritative operational data, the centre of gravity shifts away from models and back towards databases. Latency matters. Consistency matters. Concurrency matters. Security boundaries matter. Above all, correctness matters.

This is the point at which inferencing stops looking like an AI problem and starts looking like what it actually is: a database problem, wearing an AI costume.


Inferencing Changes Once It Becomes Operational

While inferencing remains something that sits at the edge of the enterprise, its demands are relatively modest: a delayed response is tolerable… slightly stale data is acceptable. If an answer is occasionally wrong, the consequences are usually limited to a poor user experience rather than a failed business process.

That changes quickly once inferencing becomes operational. When it is embedded directly into business workflows, inferencing is no longer advisory… it becomes participatory. It influences decisions, triggers actions and – increasingly – operates in the same execution path as the systems of record themselves. At that point, inferencing stops consuming convenient snapshots of data and starts demanding access to live context data.

What is Live Context?

By live context, I don’t mean training data, feature stores or yesterday’s replica. I mean current, authoritative operational data, accessed at the point a decision is being made. Data that reflects what is happening in the business right now, not what was true at some earlier point in time. This context is usually scoped to a specific customer, transaction or event and must be retrieved under the same consistency, security and governance constraints as the underlying system of record. In other words, a relational database. Your relational database.

Live context gravitates towards RDBMS systems of record. It does not appear spontaneously – it is created at the moment a business state changes. When an order is placed, a payment is authorised, an entitlement is updated or a limit is breached, that change becomes real only when the transaction is committed to the RDBMS. Until then, it is provisional.

Analytical platforms can consume that state later, but they do not create it. Feature stores, caches and replicas can approximate it, but they do so after the fact. The only place where the current state of the business definitively exists is inside the operational production databases that process and commit transactions.

As inferencing becomes dependent on live context, it is therefore pulled towards those databases. Not because they are designed for AI workloads, and certainly not because this is desirable, but because they are the source of truth. If an inference is expected to reflect what is true right now, it must, in some form, depend on the same data paths that make the business run.
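A sketch of what such a live-context read might look like, with an in-memory SQLite database standing in for the operational system of record. The table and column names, the customer and the decision rule are all illustrative assumptions.

```python
import sqlite3

# An in-memory database standing in for the operational system of record.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE accounts(customer_id INTEGER PRIMARY KEY,
                          balance REAL, status TEXT);
    INSERT INTO accounts VALUES (42, 250.0, 'active');
""")

def live_context(customer_id: int) -> dict:
    """Fetch current, committed state scoped to one customer, at the
    moment a decision is being made. Illustrative names throughout."""
    balance, status = db.execute(
        "SELECT balance, status FROM accounts WHERE customer_id = ?",
        (customer_id,),
    ).fetchone()
    return {"balance": balance, "status": status}

# The inference step consumes live context at the point of decision...
ctx = live_context(42)
decision = "approve" if ctx["status"] == "active" and ctx["balance"] >= 100 else "decline"
print(decision)  # -> approve

# ...and the next decision reflects the next committed state change.
db.execute("UPDATE accounts SET balance = 50.0 WHERE customer_id = 42")
print(live_context(42)["balance"])  # -> 50.0
```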

This is where the tension becomes unavoidable.


Inferencing Is Now A Database Problem

Once inferencing becomes dependent on live context, it inherits the constraints of the systems that provide that context. Performance, concurrency, availability, security and correctness are no longer secondary considerations. They become defining characteristics of whether inferencing can be trusted to operate inside business-critical workflows at all.

This is why enterprise AI initiatives are unlikely to succeed or fail based on model accuracy alone. They will succeed or fail based on how well inferencing workloads coexist with production databases that were never designed, built or costed with AI in mind. At that point, inferencing stops being an AI problem to be delegated elsewhere and becomes a database concern that must be understood, designed for and owned accordingly.

This article is part of the Databases in the Age of AI series.

Databases Now Live In The Cloud

I recently stumbled across a tech news post which surprised me so much I nearly dropped my mojito. The headline of this article screamed:

Gartner Says the Future of the Database Market Is the Cloud

Now I know what you are thinking… the first two words probably put your cynicism antenna into overdrive. And as for the rest, well duh! You could make a case for any headline which reads “The Future of ____________ is the Cloud”. Databases, Artificial Intelligence, Retail, I.T., video streaming, the global economy… But stick with me, because it gets more interesting:

On-Premises DBMS Revenue Continues to Decrease as DBMS Market Shifts to the Cloud

Yeah, not yet. That’s just a predictable sub-heading, I admit. But now we get to the meat of the article – and it’s the very first sentence which turns everything upside down:

By 2022, 75% of all databases will be deployed or migrated to a cloud platform, with only 5% ever considered for repatriation to on-premises, according to Gartner, Inc.

Boom! By the year 2022, 75% of all databases will be in the cloud! Even with the cloud so ubiquitous these days, that number still surprised me.

Also, I have so many questions about this:

  1. Does “a cloud platform” mean the public cloud? One would assume so but the word “public” doesn’t appear anywhere in the article.
  2. Does “all databases” include RDBMS, NoSQL, key-value stores, what? Does it include Microsoft Access?
  3. Is the “75%” measured by the number of individual databases, by capacity, by cost, by the number of instances or by the number of down-trodden DBAs who are trying to survive yet another monumental shift in their roles?
  4. How do databases perform in the public cloud?

Now, I’m writing this in mid-2020, in the middle of the global COVID-19 pandemic. The article, which is a year old and therefore pre-COVID-19, makes the prediction that this will come true within the next two years. It doesn’t allow for the possibility of a total meltdown of society or the likelihood that the human race will be replaced by Amazon robots within that timeframe. But, on the assumption that we aren’t all eating out of trash cans by then, I think the four questions above need to be addressed.

Questions 1, 2 and 3 appear to be the domain of the authors of this Gartner report. But question 4 opens up a whole new area for investigation – and that will be the topic of the next set of blogs. But let’s finish reading the Gartner notes first, because there’s more:

“Cloud is now the default platform for managing data”

One of the report’s authors, long-serving and influential Gartner analyst Merv Adrian, wrote an accompanying blog post in which he makes the assertion that “cloud is now the default platform for managing data”.

And just to make sure nobody misunderstands the strength of this claim, he follows it with an even stronger remark:

On-premises is the past, and only legacy compatibility or special requirements should keep you there.

Now, there will be people who read this who immediately dismiss it as either obvious (“we’re already in the cloud”) or gross exaggeration (“we aren’t leaving our data centre anytime soon”) – such is the fate of the analyst. But I think this is pretty big. Perhaps the biggest shift of the last few decades, in fact.

Why This Is A Big Deal

The move from mainframes to client/server put more power in the hands of the end users; the move to mobile devices freed us from the constraints of physical locations; the move to virtualization released us from the costs and constraints of big iron; but the move to the cloud is something which carries far greater consequences.

After all, the cloud offers many well-known benefits: almost infinite scalability and flexibility, immunity to geographical constraints, costs which are based on usage (instead of up-front capital expenditure), and a massive ecosystem of prebuilt platforms and services.

And all you have to give up in return is complete control of your data.

Oh and maybe also the predictability of your I.T. costs – remember in the old days of cell phones, when you never exactly knew what your bill would look like at the end of the month? Yeah, like that, but with more zeroes on the end.

Over to Merv to provide the final summary (emphasis is mine):

The message in our research is simple – on-premises is the new legacy. Cloud is the future. All organizations, big and small, will be using the cloud in increasing amounts. While it is still possible and probable that larger organizations will maintain on-premises systems, increasingly these will be hybrid in nature, supporting both cloud and on-premises.

The two questions I’m going to be asking next are:

  1. What does this shift to the cloud mean for the unrecognised but true hero of the data centre, the DBA?
  2. If we are going to be building or migrating all of our databases to the cloud, how do we address the ever-critical question of database performance?

Link to Source Article from Gartner

Link to Merv Adrian blog post

Don’t Call It A Comeback

I’ve Been Here For Years…

Ok, look. I know what I said before: I retired the jersey. But like all of the best superheroes, I’ve been forced to come out of retirement and face a fresh challenge… maybe my biggest challenge yet.

Back in 2012, I started this blog at the dawn of a new technology in the data centre: flash memory, also known as solid state storage. My aim was to fight ignorance and misinformation by shining the light of truth upon the facts of storage. Yes, I just used the phrase “the light of truth”, get over it, this is serious. Over five years and more than 200 blog posts, I oversaw the emergence of flash as the dominant storage technology for tier one workloads (basically, databases plus other less interesting stuff). I’m not claiming 100% of the credit here, other people clearly contributed, but it’s fair to say* that without me you would all still be using hard disk drives and putting up with >10ms latencies. Thus I retired to my beach house, secure in the knowledge that my legend was cemented into history.

But then, one day, everything changed…

Everybody knows that Information Technology moves in phases, waves and cycles. Mainframes, client/server, three-tier architectures, virtualization, NoSQL, cloud… every technology seems to get its moment in the sun… much like me recently, relaxing by the pool with a well-earned mojito. And it just so happened that on this particular day, while waiting for a refill, I stumbled across a tech news article which planted the seed of a new idea… a new vision of the future… a new mission for the old avenger.

It’s time to pull on the costume and give the world the superhero it needs, not the superhero it wants…

Guess who’s back?

* It’s actually not fair to say that at all, but it’s been a while since I last blogged so I have a lot of hyperbole to get off my chest.