There is a principle that every database professional understands, even if they have never articulated it in quite this way.
When a transaction commits, the database trusts that someone meant it.
Not in a philosophical sense. In a practical, operational sense. The entire structure of enterprise transaction design – approval workflows, four-eyes controls, authorisation hierarchies, audit requirements – rests on the assumption that a human being, somewhere in the chain, exercised judgment before that commit happened. The database did not need to verify this. It never had to. The human was always there.
That assumption is becoming visible – and fragile – as agentic AI systems begin to operate directly against systems of record.
ACID guarantees atomicity, consistency, isolation and durability. It guarantees that what you commit is complete, correct relative to the schema, isolated from competing operations and permanent. It does not guarantee that the thing you committed was something anyone should have committed. That was never the database’s problem to solve.
It was the human’s problem. And for fifty years, humans solved it.
What ACID Was Never Asked to Do
The relational transaction model emerged in a world where every database write originated with a deliberate human act. A purchase order entered by a procurement clerk. An inventory adjustment approved by a warehouse manager. A payment authorised by a finance controller. Between the business decision and the commit, there was always a human who understood what they were doing and why.
This created an implicit guarantee that ACID was never asked to provide: the guarantee that the commit reflected a considered, authorised intention.
Database engineers built the four ACID properties to solve technical problems – preventing partial updates, maintaining referential integrity, handling concurrent access, surviving hardware failures. These are real and difficult problems, and the relational model solves them well.
But technical correctness was never the same thing as intentional correctness.
Technical correctness means the transaction left the database in a consistent state. Intentional correctness means the transaction reflected a decision that someone authorised and understood.
For five decades, these were the same thing in practice. A technically correct commit was an intentionally correct commit, because no commit happened without a human in the loop.
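The distinction can be made concrete with a minimal sketch. The schema and values below are hypothetical, chosen only to illustrate the point: the engine enforces atomicity, constraints and durability, but nothing in the commit path records whether the decision behind the write was authorised.

```python
import sqlite3

# Hypothetical supplier-payments schema. The database enforces technical
# correctness (constraints, referential integrity, atomic commit) and
# nothing more.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE suppliers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE payments (
        id       INTEGER PRIMARY KEY,
        supplier INTEGER NOT NULL REFERENCES suppliers(id),
        amount   REAL NOT NULL CHECK (amount > 0)  -- schema-level validity only
    );
    INSERT INTO suppliers VALUES (1, 'Acme Ltd');
""")

# Whether this row came from a finance controller or from an agent acting
# on a misread context window, the transaction is identical to the engine:
# atomic, consistent with the schema, isolated, durable.
with conn:  # BEGIN ... COMMIT
    conn.execute("INSERT INTO payments (supplier, amount) VALUES (1, 250.0)")

row = conn.execute("SELECT supplier, amount FROM payments").fetchone()
print(row)  # the log records what changed; it carries nothing about why
```

The commit succeeds either way. Intentional correctness lives entirely outside what the engine can see.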
Databases were built for humans. Agentic AI changes the equation.
The Problem Is Not Malice. It Is Drift.
AI agents are not attacking systems of record.
They are operating as they were designed to operate (retrieving data, evaluating context, executing actions, committing results). In agentic AI systems, the problem is not that agents behave incorrectly relative to their instructions. The problem is the gap between what an agent was instructed to do and what a human would have authorised.
Consider a straightforward scenario: an AI agent managing supplier payments detects what it classifies as a duplicate invoice and suppresses the payment. The agent has followed its instructions. The transaction it commits – or the one it prevents – is technically correct. The database reflects a consistent state. ACID is satisfied in every respect.
But the invoice was not a duplicate. It was a resubmission for a partially fulfilled order that the agent’s context window did not capture. A human reviewing the same situation would have recognised the pattern immediately, or asked a question before deciding.
The agent did not ask. Agents generally do not ask. Asking is slow, and reducing the need for human intervention is the point.
The commit happened. The database state changed. The business event became real. Nobody authorised it.
This is not a failure mode in the traditional sense. Nothing broke. No constraint was violated. The audit log records a valid transaction, committed by an agent with appropriate credentials, against an account that existed, in an amount that was permitted by policy.
What the audit log cannot record is that the underlying decision was wrong.
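The misclassification above can be sketched in a few lines. This is a hypothetical reconstruction of the agent's logic, not any real system: the check matches on supplier and amount because the agent's context window lacks fulfilment history.

```python
# Naive duplicate rule, assumed for illustration: an invoice is a
# "duplicate" if a recent invoice has the same supplier and amount.
def looks_like_duplicate(invoice, recent_invoices):
    return any(
        prev["supplier"] == invoice["supplier"]
        and prev["amount"] == invoice["amount"]
        for prev in recent_invoices
    )

# The earlier invoice was only partially fulfilled -- a fact present in
# the record but absent from the rule the agent applies.
recent = [{"supplier": "Acme Ltd", "amount": 1200.0,
           "status": "partially_fulfilled"}]
resubmission = {"supplier": "Acme Ltd", "amount": 1200.0}

# The rule fires, the payment is suppressed, and the resulting database
# state is technically correct. A human seeing 'partially_fulfilled'
# would have asked a question first.
print(looks_like_duplicate(resubmission, recent))
```

Nothing in this path is broken in a way any existing control would flag; the error lives in what the rule never consulted.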
The Moment When Intent Became Implicit
To understand why this gap exists, it helps to be precise about when the assumption entered the architecture.
As Article 7 of this series argued, the commit is the moment when intent becomes fact. Before that moment, a business decision is a recommendation, a proposal, a signal. After it, the decision is real – legally, financially and operationally.
But the architecture that surrounds the commit was built to verify what was committed, not who decided to commit it or why. Access controls verify identity and permission. Constraints verify schema conformance. Triggers verify referential relationships. Transaction logs record what changed and when.
None of these mechanisms capture the reasoning chain that produced the commit.
In the human era, that was fine. The reasoning chain was in the human’s head. If the commit was questionable, you could ask the person who made it. The audit trail led back to a human who could be held accountable and who, in most cases, could explain their reasoning.
With agents, the reasoning chain lives in a model’s context window at the moment of execution – and that context is gone when the transaction ends.
The database records what happened. The reasoning behind it evaporates.
What the Database Sees
The failure modes that produce this gap vary in character. A language model can misread a retrieved record or act on information it inferred rather than read, committing a well-formed transaction that reflects a decision it effectively invented. In multi-agent architectures, instructions arrive from orchestrating agents rather than humans, and each handoff introduces drift between the original intent and the action that finally reaches the database. Agents operating at machine speed can commit hundreds of transactions before any of them surface in a monitoring dashboard, closing the oversight window faster than controls designed for human timescales can operate.
What these failure modes share is that the database is structurally blind to all of them.
This is not a criticism of the relational model. It is a description of what the relational model was designed to do. The database was designed to record state changes that originated from trusted, authorised actors. It has always assumed that the trust and authorisation frameworks existed outside itself – in the application tier, in the human decision-making layer, in the organisational controls that preceded the commit.
Those external controls are now increasingly operated by systems whose reasoning the database cannot examine.
A transaction committed by an agent with valid credentials, against a permitted resource, within the constraints of its defined permissions, will be indistinguishable in the transaction log from a transaction committed by a human who reviewed the situation carefully and made a deliberate decision.
The database cannot tell the difference. It was never built to.
What was always a reasonable design assumption (that every commit arrived carrying human judgment) has become an architectural gap.
The Gap Has a Name, but Not Yet an Architecture
The industry is beginning to grapple with this problem.
Terms like intent auditing and agent identity layers are starting to appear. The idea that agent actions should carry verifiable provenance – a record not just of what was committed but of the reasoning context in which the commitment was made – is beginning to surface in architectural discussions and early product thinking.
The principle is sound. If the relational model assumes that commits reflect authorised intent, and that assumption is no longer reliable by construction, then something external to the database needs to restore it. Not by slowing agents down to human speed, but by creating a record of the reasoning chain that the database cannot currently capture.
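What such a record might contain can be sketched, with the strong caveat that no mature standard or product defines it today. Every field name below is hypothetical, illustrative of the provenance the article argues the database cannot currently capture.

```python
import hashlib
import json
from dataclasses import dataclass, asdict, field

# Hypothetical "intent record" written alongside a transaction: a durable
# note of who (or what) decided, under which instruction, on what evidence.
# This is a sketch of the idea, not a real schema or API.
@dataclass
class IntentRecord:
    actor: str                      # agent or human identity behind the commit
    instruction: str                # the task the actor was carrying out
    evidence: list = field(default_factory=list)  # records consulted
    rationale: str = ""             # the stated reason for the action

    def fingerprint(self) -> str:
        # Deterministic content hash, so the record is tamper-evident and
        # can be linked to the transaction it explains.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = IntentRecord(
    actor="payments-agent-07",
    instruction="suppress duplicate supplier invoices",
    evidence=["invoice:4411", "invoice:4377"],
    rationale="amount and supplier match invoice 4377",
)
print(record.fingerprint()[:12])  # stored next to the commit, not inside it
```

The point of the sketch is the separation of concerns: the transaction log keeps recording what changed, while a parallel record preserves the reasoning context that currently evaporates at commit time.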
Put simply: intent is now a first-class concern in data systems. The honest assessment is that the architecture to handle it does not yet exist in mature form. The controls enterprises are deploying now (credentialing agents as service accounts, scoping their permissions tightly, logging their operations through existing audit infrastructure) are the right immediate responses. They solve the identity problem. They do not solve the intent problem.
An agent with correctly scoped permissions can still commit transactions that no human would have authorised, for reasons no audit log currently captures.
That gap is not closing on its own. It is widening, at the pace agents are being deployed into production.
The relational transaction model has been the foundation of enterprise data integrity for fifty years. It is not failing. It is working exactly as designed.
The problem is that it was designed for a world where human judgment was assumed to be present at every commit, and it has no mechanism to detect when that assumption is no longer being met.
The commit is still the moment reality is made. AI agents are reaching that moment faster, in greater volume, and with less human oversight than any model of enterprise transaction integrity was built to handle.
The database will record what happened. Faithfully and durably. Correct in every technical sense.
What it cannot record is whether anyone should have let it happen.
That is the problem this phase of the series is here to examine. The answer is not yet clear.
This article is part of the Databases in the Age of AI series.