Pick any enterprise database deployment from the last thirty years and ask the question that nobody explicitly asked at design time: who is this system built for?
The answer is embedded in every layer of the architecture. Role-based access control organised around job functions. Audit logs capturing usernames tied to employment records. Connection pools sized for the number of people plausibly logged in at once. Session management built around the expectation that someone, somewhere, is sitting at a terminal. Privilege escalation procedures that assume a named employee is requesting elevated access for a documented reason. Even the compliance frameworks that govern these systems – SOX, GDPR, HIPAA, PCI-DSS – were written in a world where every database action had a human author.
Enterprise database identity is not a technical abstraction. It is a social contract written into code. And every term of that contract assumes the other party is a person.
The Service Account Approximation
The challenge is not that organisations are unaware of non-human access. Service accounts have existed for decades. Application tiers authenticate to databases using credentials that represent a system rather than a person. Batch jobs run under dedicated identities. ETL pipelines, monitoring agents, replication processes – all of these operate without a human typing a password, and enterprise database administration has long since developed conventions for handling them.
But those conventions were built around a specific model of machine access: predictable, bounded and purpose-specific. A service account for a nightly billing job does a known set of things on a known schedule – and nothing else. Its blast radius (the scope of damage if something goes wrong) is understood. Privilege is granted accordingly. The audit entry for its actions is meaningless without context, but that context exists in the design documentation, the deployment runbook, the change management ticket that authorised it.
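The bounded service-account model described above can be sketched as data: a known set of permitted actions and an expected schedule, against which any observed action either matches or does not. This is an illustrative sketch, not any vendor's mechanism; the account and table names are invented.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "bounded" service-account model: a fixed,
# enumerable set of (operation, object) pairs and a known schedule.
# All names (svc_billing_nightly, invoices, billing_runs) are illustrative.

@dataclass(frozen=True)
class ServiceAccountProfile:
    name: str
    allowed_operations: frozenset  # (operation, object) pairs the account may perform
    schedule: str                  # when the account is expected to be active

    def is_expected(self, operation: str, obj: str) -> bool:
        # An action is "in profile" only if it falls inside the known, bounded set.
        return (operation, obj) in self.allowed_operations

billing_batch = ServiceAccountProfile(
    name="svc_billing_nightly",
    allowed_operations=frozenset({
        ("SELECT", "invoices"),
        ("INSERT", "billing_runs"),
        ("UPDATE", "invoices"),
    }),
    schedule="02:00 daily",
)

print(billing_batch.is_expected("SELECT", "invoices"))   # in profile
print(billing_batch.is_expected("DELETE", "customers"))  # out of profile
```

The point of the sketch is that the profile is closed: anything outside the enumerated set is, by construction, anomalous. An agent operating under the same credential has no such closed set.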
This is the model that AI agents inherit when they connect to a database. They authenticate as a service account, or as an application layer credential, or – most concerning – under the delegated identity of the human user whose session the agent is acting within.
The database sees a familiar identity. It issues familiar grants. It records familiar audit entries.
What it cannot see is that the reasoning behind the actions is no longer predictable, no longer bounded and no longer reviewable before the commit.
The Identity Gap
The problem runs deeper than authentication. Authentication is the mechanism by which a database establishes who – or what – it is talking to. Most agent deployments pass this test without friction. The agent presents a valid credential. The database accepts it. From the database’s perspective, the session looks legitimate because it is legitimate, in the narrow technical sense.
What databases have never been designed to represent is the distinction between an authenticated principal and the reasoning entity that instructed that principal to act.
When a human logs into a database, the authenticated identity and the reasoning entity are the same thing. The DBA connecting to the production schema is the person with the job title, the access rights, the accountability and the judgment.
Identity and intent are co-located.
An agent collapses that co-location. The authenticated identity might be a service account owned by a platform team. The reasoning entity is a language model operating on behalf of a user in a different department, acting on instructions from an orchestration layer, possibly in response to inputs that no human has reviewed. The database sees the service account. It sees the queries and writes that account generates. It has no concept of the chain of instruction that produced those actions and no mechanism to represent that chain in any data structure it maintains.
The actors changed. The database’s identity model did not.
What the Audit Log Cannot Say
Consider what an audit log entry actually records: the timestamp, the authenticated user, the object accessed, the operation performed, the outcome.
This is, in principle, everything you need to reconstruct what happened. In a world of human actors, it frequently is. You can find the person, establish their role, review their authorisation and determine whether the action was appropriate. The audit log is the foundation of every compliance investigation, every incident post-mortem, every regulatory response.
An agent acting under a shared service account credential produces audit entries that are technically complete and operationally useless for this purpose.
You cannot find the person, because the person did not directly perform the action. You can find the human who prompted the agent – perhaps – if the observability tooling upstream has captured that chain. You cannot determine from the database record alone whether the action reflected authorised intent, a misinterpreted instruction, a hallucinated directive or the downstream consequence of another agent’s decision in a different system. The commit is the moment reality is made, and the database records that moment faithfully. It records nothing about what preceded it.
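The gap between what the entry records and what an investigation needs can be shown directly: the fields listed above exist; the provenance fields do not. The schema below is a sketch with invented field names, not any vendor's audit format.

```python
from dataclasses import dataclass

# Sketch of a conventional audit entry: the fields the article lists are
# present; the fields an agent-driven action would need are absent, and
# appear here only as comments. Field names are illustrative.

@dataclass
class AuditEntry:
    timestamp: str           # when it happened
    authenticated_user: str  # the credential that presented itself
    object_accessed: str
    operation: str
    outcome: str
    # What the model has no slot for:
    #   acting_agent       -- which agent generated the action
    #   instructing_user   -- whose authority it acted under
    #   instruction_chain  -- the prompts and decisions that preceded the commit

entry = AuditEntry(
    timestamp="2025-06-01T02:14:07Z",
    authenticated_user="svc_billing_nightly",
    object_accessed="invoices",
    operation="UPDATE",
    outcome="success",
)

# Technically complete; silent on who, in any meaningful sense, acted.
print(entry.authenticated_user)
```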
This is not a logging configuration problem. It is a conceptual gap between what the audit model was designed to capture and what agent-driven action actually produces.
Governance Without a Subject
The compliance and governance frameworks that enterprise database teams live under share a common structural assumption: that accountability can always be traced to a named individual. Data access policies specify who can see what. Regulatory requirements mandate that sensitive operations be attributable to an authorised person. Incident response procedures begin with identifying the actor.
The separation between technical and intentional correctness makes this worse: even if you wanted to trace the intent behind a given commit, the database’s identity model gives you no path to do so. The agent is invisible to the system of record as a first-class actor. There is no field for it. There is no concept for it. It passes through the identity layer by borrowing someone else’s identity and leaves behind only the record of what that borrowed identity did.
Resource governance faces the same structural failure. Query resource management, workload prioritisation, connection limits – all are organised around principals whose behaviour is modelled and whose resource consumption is expected. An agent operating under a shared credential contributes its load to the profile of that credential with no separation. When the agent begins behaving unexpectedly – generating the compression and recursion patterns that define agent load – the database has no mechanism to isolate, attribute or govern that behaviour at the agent level. It sees only a familiar account doing unfamiliar things.
The Organisational Dimension
There is a final layer to this that sits beyond architecture. The people deploying AI agents are typically not the people who own the systems those agents are acting on. A business unit deploys an AI workflow. That workflow authenticates to a database owned by a platform team operating under its own governance framework. The platform team’s identity model was designed to manage the principals it knows about – its own service accounts, its own users, its own applications. It was never designed to become the identity infrastructure for AI systems built and deployed by other parts of the organisation, under different ownership, operating under different authorisation assumptions. The gap is not just technical. It is jurisdictional.
Where the Industry Is Looking
The direction of travel is visible. Machine identity frameworks are emerging – the conversation at Gartner’s IAM Summit earlier this year centred on the widening definition of “identity” and the operational challenge of governing systems that can no longer be seen clearly. Agent identity registries are being discussed – recent research from the Cloud Security Alliance found that only 23% of organisations have a formal strategy for agent identity management, and fewer than half felt confident they could pass a compliance review focused on agent behaviour. The concept of verifiable agent provenance – a way to assert, at the point of database access, not just which credential is presenting but which agent, acting on whose authority, with what instruction history – is starting to appear in architecture conversations.
But none of this exists at the database layer in any mature form. The database vendors have not shipped it. The standards bodies have not defined it. The compliance frameworks have not required it.
What exists today is a set of workarounds: service accounts with narrowed privileges, application-layer proxy systems that attempt to enforce agent-specific policies before requests reach the database, observability tooling that tries to reconstruct agent identity from upstream signals and inject it into audit pipelines. These are reasonable responses to an immediate problem. They are not solutions to the underlying one.
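One of those workarounds can be sketched in a few lines: an application-layer shim that appends agent provenance to each query as a trailing SQL comment, in the style of the sqlcommenter pattern, so log scrapers downstream can reconstruct who instructed the action. The key names (`agent_id`, `on_behalf_of`) are assumptions, not a standard.

```python
import urllib.parse

# Sketch of an application-layer annotation shim (sqlcommenter-style):
# agent provenance is URL-encoded into a trailing SQL comment. The keys
# agent_id and on_behalf_of are illustrative, not a defined standard.

def annotate(sql: str, agent_id: str, on_behalf_of: str) -> str:
    meta = {"agent_id": agent_id, "on_behalf_of": on_behalf_of}
    comment = ",".join(
        f"{key}='{urllib.parse.quote(value)}'" for key, value in sorted(meta.items())
    )
    return f"{sql} /*{comment}*/"

query = annotate(
    "UPDATE invoices SET status = 'paid' WHERE id = 42",
    agent_id="billing-agent-7",
    on_behalf_of="analyst@finance",
)
print(query)
```

Note the limitation, which is the article's point: the comment survives into the query log, but not into the audit log's identity fields. The database still attributes the action to the borrowed credential.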
The underlying problem is that enterprise databases were built around an actor model that no longer describes the full population of actors using them. Until that model expands to accommodate agents as first-class principals (with their own identity, their own audit trail, their own resource governance surface) the systems of record at the centre of every enterprise will continue to operate with a fundamental blind spot.
They will know what happened. They will not know who – in any meaningful sense – made it happen.
This article is part of the Databases in the Age of AI series.