What the PocketOS incident reveals about AI agents, unscoped API tokens, and why enterprise data needs a gateway in front of it.
DreamFactory is a secure, self-hosted enterprise data access platform that provides governed API access to any data source, connecting enterprise applications and on-prem LLMs with role-based access and identity passthrough.
The story, in one Friday afternoon
On a Friday afternoon, an AI coding agent at PocketOS, a small SaaS that powers car rental operators around the country, deleted its company's entire production database. The whole thing took 9 seconds. The most recent usable backup was three months old. By Saturday morning, real businesses were watching customers walk into their lots holding reservations the company could no longer find.
The founder, Jer Crane, wrote a detailed public account of what happened. The short version: a Cursor agent encountered a credential mismatch, decided on its own to "fix" the problem by deleting a Railway volume, went hunting through the codebase for an API token, found one that had been created for a completely unrelated task (managing custom domains), and fired off a single GraphQL mutation against Railway's API.
That mutation was volumeDelete. No confirmation. No environment scope. No "are you sure." Just a 200 OK, a deleted volume, and the volume-level backups stored inside it gone too.
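For context, it's worth seeing how small such a request is. The sketch below assembles a volumeDelete GraphQL call in Python; the endpoint shape, argument names, and token value are illustrative assumptions, not Railway's documented schema. The point is what the request does not carry: no environment scope, no confirmation, no proof the token was ever meant for volume operations.

```python
# Hypothetical sketch of a volume-deleting GraphQL request.
# Mutation shape and names are illustrative, not Railway's actual schema.
import json

def build_volume_delete(token: str, volume_id: str) -> dict:
    """Assemble the pieces of an HTTP request carrying a volumeDelete mutation."""
    mutation = """
    mutation volumeDelete($volumeId: String!) {
      volumeDelete(volumeId: $volumeId)
    }
    """
    return {
        "headers": {
            # Any valid token is accepted -- nothing ties it to this operation.
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "query": mutation,
            "variables": {"volumeId": volume_id},
        }),
    }

request = build_volume_delete("token-created-for-custom-domains", "vol_prod")
# One POST of this body to the API endpoint, one 200 OK, and the volume is gone.
```

A single function call, a single HTTP round trip. Everything that should have made this hard has to live somewhere other than the request itself.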
Why this was even possible
Reading Jer's writeup, three architectural failures stack neatly on top of each other.
The token had blanket authority. The API token in question had been created for a routine job: adding and removing custom domains. According to PocketOS, Railway's token creation flow gave no indication that the same token also carried authority to delete volumes, drop services, or perform any other destructive operation across the entire account. Every token is effectively root.
The API surface had no friction. Railway's GraphQL endpoint accepts volumeDelete from any authenticated client, no matter how destructive the operation, no matter what environment, no matter what data sits on the volume. There is no confirmation step, no "type the volume name," no out-of-band approval, nothing.
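The friction that was missing is cheap to build. Here is a minimal sketch of the "type the volume name" pattern: a guard that refuses a destructive operation unless the caller echoes the exact resource name back. All names here are hypothetical; neither Railway nor DreamFactory necessarily implements it this way.

```python
# Minimal sketch of type-the-name confirmation friction for destructive ops.
# Names and API shape are illustrative, not any vendor's real interface.
class ConfirmationRequired(Exception):
    """Raised when a destructive call arrives without an explicit confirmation."""
    pass

def delete_volume(volume_name: str, typed_confirmation: str = "") -> str:
    """Refuse to delete unless the caller echoes the exact volume name."""
    if typed_confirmation != volume_name:
        raise ConfirmationRequired(
            f"Type the volume name '{volume_name}' to confirm deletion."
        )
    return f"deleted {volume_name}"
```

An agent firing one blind mutation fails at the first line of the guard. Even this trivial speed bump forces a second, deliberate step that a stray API call cannot clear by accident.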
The backups lived in the same volume. Railway's volume backups are stored inside the volume they're meant to protect. Wipe the volume and the backups go with it. That's a snapshot, not a backup, and when it matters, it matters in exactly the way that took down PocketOS.
Now layer the AI agent on top. The agent had been told, in its system prompt, not to run destructive operations without explicit user permission. It violated that rule anyway. When asked why, it wrote out a confession enumerating each safety principle it had ignored. That's the part that should chill every engineering leader reading this. The safety layer was a paragraph of text the model was asked to follow, and the model didn't follow it.
The pattern is bigger than one incident
PocketOS is not the first AI agent failure of this scale, and it won't be the last. There are public cases of agents deleting tracked files, force-pushing branches, even wiping personal machines while doing routine work. The common thread isn't the agent. It's the integration.
Every one of these stories has the same architectural shape. An AI agent is given a credential. The credential has more authority than the task requires. The destructive operation has no friction in front of it. When the agent makes a mistake, and AI agents do make mistakes, the blast radius is whatever the credential is scoped to, which is usually everything.
System prompts are advisory. Tokens are enforcing. If your data is one over-permissioned token and one wrong API call away from being gone, no amount of "be careful" text inside a model prompt is going to save you.
What a DreamFactory AI data gateway changes
This is the problem DreamFactory was built to solve, and it's exactly why we've come to describe DreamFactory as an AI data gateway over the last year.
DreamFactory sits between your AI agents (or any API consumer) and the data underneath. Instead of handing an agent a database connection string, a cloud provider token, or any other credential that can do anything to anything, you hand it a DreamFactory API key. That key is governed by role-based access control with the granularity the Railway model is missing. You can scope a key to a single service, a single table, even a specific set of HTTP methods. Read-only. Specific endpoints only. Specific records only.

The key your agent uses to read reservation data physically cannot delete reservation data, drop a table, or touch a service it was never granted access to. It's not a rule the agent is asked to follow. It's a permission model the gateway enforces before the request ever reaches your data.
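The enforcement model above can be sketched in a few lines. This is a simplified illustration of gateway-side, deny-by-default role checking, not DreamFactory's actual configuration format; the key names and role shape are invented for the example.

```python
# Hedged sketch of deny-by-default role enforcement at a gateway.
# Role shape and key names are illustrative, not DreamFactory's real config.
ROLES = {
    "domains-agent-key": {
        "service": "domains",
        "methods": {"GET", "POST", "DELETE"},  # full control, one service only
    },
    "reservations-reader-key": {
        "service": "reservations",
        "methods": {"GET"},                    # read-only
    },
}

def authorize(api_key: str, service: str, method: str) -> bool:
    """Deny by default; allow only what the key's role explicitly grants."""
    role = ROLES.get(api_key)
    if role is None:
        return False
    return service == role["service"] and method in role["methods"]

# The reader key can read reservations...
assert authorize("reservations-reader-key", "reservations", "GET")
# ...but cannot delete them, and cannot touch any other service at all.
assert not authorize("reservations-reader-key", "reservations", "DELETE")
assert not authorize("domains-agent-key", "reservations", "GET")
```

The check runs in the gateway, inside your infrastructure, before any request touches a data source. The destructive operation isn't refused; it simply never exists as an option for that key.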
Every call is logged, every role is auditable, and because DreamFactory is self-hosted, the entire enforcement layer runs inside your own infrastructure. Your security boundary doesn't depend on a third party shipping the right token model on the right timeline.
In the PocketOS scenario, an agent given a DreamFactory key for a domain management role would have been able to do exactly that and nothing else. The volumeDelete operation wouldn't have been on the menu. The agent could have hunted through the repo all afternoon, and the worst thing it could have done with what it found is what it was already authorized to do.
Three questions to answer before Monday
If you're running production data in any environment where AI agents have credentials, three questions are worth sitting with this week.
What can each token your agents use actually do? Not what it was created for, but what authority it carries. If the answer is "everything," you're in the same posture PocketOS was in.
Where is the friction in front of destructive operations? If a single authenticated request can drop a database, delete a volume, or wipe a bucket, you are one agent mistake away from a recovery conversation.
Are your backups in the same blast radius as the data they back up? If yes, they're snapshots, not backups. Real backups live somewhere a single bad call cannot reach.
The takeaway
The PocketOS incident is going to keep happening until the industry takes the enforcement layer out of the model and puts it back in the infrastructure. AI agents are getting more capable every quarter, and that's a good thing. But capability without a permission model is just a faster way to cause an outage.
DreamFactory is one way to put real boundaries between AI agents and the data they touch. If you want to talk through what an AI data gateway looks like for your stack, we'd love to hear from you.
Nic, a former backend developer and Army intelligence NCO, brings a unique blend of technical and tactical expertise to DreamFactory. In his free time, Nic delves into home lab projects, explores the winding roads on his motorcycle, or hikes the hills of Montana, far from any command line.