Last updated: May 2026
A PostgreSQL MCP server is a service that exposes PostgreSQL databases as tools an AI agent can call through the Model Context Protocol (MCP). Rather than giving an LLM direct database credentials, you put an MCP server between the agent and the database. The agent discovers what queries it can run, calls them as named tools, and the MCP server translates those calls into safe, governed SQL against PostgreSQL.
The pragmatic path to a production-grade PostgreSQL MCP server is built on a foundation that already exists in most modern environments: an auto-generated REST API over the PostgreSQL schema. Tools like DreamFactory have been doing exactly this for over a decade — connect a database, get a fully documented, role-protected REST API in minutes, no code written. The MCP server is then a thin surface on top of that same generated API. DreamFactory's framing: "turn your database into an auditable, governed API layer for AI" — the API layer is the foundation; the AI surface is the addition. This guide covers what a PostgreSQL MCP server actually is, the three common ways to set one up, the security controls a production deployment needs, and how the auto-generated-API-plus-MCP pattern handles the governance.
The Model Context Protocol is an open standard, introduced by Anthropic and rapidly adopted by other LLM providers, that defines how AI models discover and call external tools at runtime. An MCP server exposes tools (callable functions with typed parameters and semantic descriptions) and resources (readable data). An MCP client — Claude Desktop, Claude Code, the Claude API with MCP enabled, ChatGPT, Cursor, Windsurf, or any other MCP-capable agent — consumes those tools.
As DreamFactory describes the pattern: MCP gives teams "a single reusable port for multiple AI assistants" rather than bespoke integrations per LLM platform. The same MCP-exposed PostgreSQL tools work across every MCP-capable client.
For PostgreSQL specifically, MCP solves a problem that has dogged AI integration since LLMs got tool-use: how to expose data to an agent safely. Direct database credentials are too dangerous; bespoke integrations don't scale. MCP gives you a structured middle ground — named tools, semantic descriptions, type-safe parameters — that constrains what the agent can do while keeping the developer experience simple.
Before discussing MCP-specific patterns, it's worth being explicit about the layer underneath. Every production MCP server for PostgreSQL is, in practice, a REST API for PostgreSQL plus a thin MCP surface on top. The MCP server exposes tools; those tools resolve to HTTP calls; the HTTP calls hit a REST API that talks to PostgreSQL.
This is where DreamFactory's history matters. The product has been auto-generating REST APIs from databases for over a decade. Connect it to a PostgreSQL database, and DreamFactory:

- generates REST endpoints over the schema, with no code written;
- documents every endpoint with OpenAPI;
- protects the API with role-based access control and API keys;
- applies rate limiting and logs every call for audit.
That REST API is useful by itself — human developers integrate with it, mobile apps consume it, internal tools depend on it. The MCP server is the next surface for the same generated endpoints. One configuration; two surfaces. As DreamFactory's CTO Kevin McGahey demonstrates in the platform's MCP tutorial, the entire pipeline — database credentials in a config file, MCP call, auto-generated API, read-only role, API key, working dashboard built by Claude Code — completes in minutes.
A PostgreSQL MCP server is an MCP server whose tools wrap PostgreSQL queries — typically via the auto-generated REST API described above. Instead of exposing a generic "run SQL" capability (high risk), a well-designed PostgreSQL MCP server exposes named, scoped, semantically meaningful tools — for example, "get_customer_by_email", "list_recent_orders", "compute_monthly_revenue".
The agent reads the tool descriptions, picks the right one for the user's question, and calls it with structured parameters. The MCP server resolves the call to the matching REST endpoint, which translates it into a safe, parameterised SQL query against PostgreSQL and returns the result. The agent never writes or sees raw SQL.
This matters because LLMs are non-deterministic. The same user request can produce dozens of slightly different SQL queries, some of which might be wasteful, expensive, or unsafe. Constraining the model to an allow-listed set of tools turns an unpredictable SQL-generation problem into a predictable tool-selection problem — far easier to audit, govern, and trust. DreamFactory's framing for this principle is "containment by design" — every AI action is bounded by the policies you set.
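The allow-list principle can be sketched in a few lines. This is a minimal illustration, not DreamFactory's implementation: the tool names and schema are hypothetical, and sqlite3 stands in for PostgreSQL so the sketch runs anywhere.

```python
import sqlite3

# Allow-list: each tool name maps to one parameterised SQL template and
# the exact set of parameters it accepts. Anything outside this dict
# simply does not exist as far as the agent is concerned -- there is no
# generic "run SQL" escape hatch. Tool names and schema are illustrative.
TOOLS = {
    "get_customer_by_email": (
        "SELECT id, name, email FROM customers WHERE email = :email",
        {"email"},
    ),
    "list_recent_orders": (
        "SELECT id, total FROM orders ORDER BY id DESC LIMIT :limit",
        {"limit"},
    ),
}

def call_tool(conn, tool_name, params):
    """Resolve an agent's tool call to a governed, parameterised query."""
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    sql, expected = TOOLS[tool_name]
    if set(params) != expected:
        raise ValueError(f"{tool_name} expects parameters {sorted(expected)}")
    # Values are bound by the driver, never interpolated into the SQL text.
    return conn.execute(sql, params).fetchall()
```

An unknown tool name or an unexpected parameter set fails closed with an error, which is exactly the shift from unpredictable SQL generation to auditable tool selection.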
The specific use cases vary, but the common thread is the same: human work that previously required custom dashboards, manual SQL, or hard-coded integration code is now done by an LLM that calls a constrained, governed set of tools against your database. DreamFactory's customers using this pattern include ExxonMobil, Toyota, Saint-Gobain, PPG, Deloitte, Google Cloud, AkerBP, Netgear, Miller Industries, the National Institutes of Health, the Vermont Agency of Transportation, D.A. Davidson, and Pillsbury Law.
| Approach | Best for | What gets generated | Time to set up | Production-ready? |
|---|---|---|---|---|
| Anthropic's official postgres-mcp-server | Prototyping, exploration, demos | Read-only MCP server with parameterised query tool | ~10 minutes | No — needs hardening |
| Custom MCP server (TypeScript / Python SDK) | Specialised tools, custom logic, niche control | Whatever you build | Days to weeks | Yes if you build the controls |
| DreamFactory AI Data Gateway | Production, multi-tenant, regulated, enterprise | Full REST API over PostgreSQL, OpenAPI-documented, role-protected, rate-limited, audit-logged — plus an MCP server exposing the same endpoints to AI agents | Minutes | Yes — designed for it |
The simplest starting point. Anthropic publishes a reference implementation in the modelcontextprotocol/servers repository. It takes a PostgreSQL connection string and exposes a small set of read-only tools. Configure Claude Desktop to launch it, and you're querying your database from within a Claude conversation in under fifteen minutes.
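Claude Desktop launches MCP servers from its `claude_desktop_config.json`. A minimal entry might look like the following; the package name follows Anthropic's reference repository, and the connection string is a placeholder. Use a dedicated read-only PostgreSQL role here, never a superuser:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://readonly_user:secret@localhost:5432/appdb"
      ]
    }
  }
}
```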
The trade-offs are deliberate. The official server is meant as a reference, not a hardened production component. It runs with the credentials you give it (no per-user identity), it doesn't enforce query allow-listing beyond read-only, it has no built-in audit log, and it expects you to handle network exposure and TLS yourself. For prototyping, that's fine. For anything touching production data, layer additional controls on top — or skip to one of the next two approaches.
The MCP SDKs (TypeScript and Python are the most mature) let you build a custom server from scratch. You define each tool, write the underlying query logic, and ship the result as a binary or container.
This makes sense when you want very specific, opinionated tools: "get_customer_lifetime_value", "find_similar_orders", "summarise_recent_activity_for_user". You're trading setup time for control. Plan on several days of engineering for a tightly-scoped server with a handful of tools, plus the ongoing work of building authentication, query governance, observability, and an audit log around it.
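The plumbing you end up building around each tool looks roughly like this. An SDK-agnostic sketch under stated assumptions: in a real server these handlers would be registered with the MCP TypeScript or Python SDK, and the tool, role names, and row data below are illustrative.

```python
import time

# Each tool pairs a handler with the role required to call it.
def list_recent_orders(params):
    # Stand-in for a parameterised query against PostgreSQL.
    rows = [{"id": 2, "total": 99.0}, {"id": 1, "total": 42.0}]
    return rows[: params.get("limit", 10)]

TOOLS = {"list_recent_orders": (list_recent_orders, "analyst")}
AUDIT_LOG = []  # in production: an append-only store, not an in-memory list

def handle_call(user, roles, tool, params):
    handler, required_role = TOOLS[tool]
    # Role check before the query runs.
    if required_role not in roles:
        raise PermissionError(f"{user!r} lacks role {required_role!r}")
    rows = handler(params)
    # Audit record after it: who called what, with which parameters,
    # and how much data came back.
    AUDIT_LOG.append({"ts": time.time(), "user": user, "tool": tool,
                      "params": params, "rows": len(rows)})
    return rows
```

The handler itself is the easy part; the role check, audit record, and their operational backing stores are where the "several days of engineering" go.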
DreamFactory's AI Data Gateway takes the API-generation foundation the product has had for years and adds the MCP surface on top. Connect it to a PostgreSQL database and:

- the platform auto-generates a documented, role-protected REST API over the schema;
- the built-in MCP server exposes those same endpoints as tools to any MCP-capable client;
- RBAC, rate limiting, and audit logging apply to both surfaces from the same configuration.
For most enterprise deployments — especially anything regulated — this is the path of least resistance to a production deployment.
The request-response flow for a typical query:

1. A user asks a question in an MCP client such as Claude Desktop.
2. The agent reads the available tool descriptions and selects the one that matches (say, "compute_monthly_revenue").
3. The agent calls the tool with structured parameters such as start_date and end_date.
4. The MCP server resolves the call to the corresponding auto-generated REST endpoint.
5. The endpoint runs a safe, parameterised SQL query against PostgreSQL and returns the result to the agent.

The agent has its own credentials, but the audit trail and access decisions are driven by the calling human user. DreamFactory's RBAC layer authenticates the user via Entra ID, OAuth 2.0, SAML, LDAP, or Active Directory, then enforces per-table, per-endpoint, per-HTTP-verb access. PostgreSQL's role-based grants and row-level security policies remain in effect at the database layer.
Don't expose a generic "run any SQL" tool. Expose named tools that represent specific business operations. DreamFactory's auto-generated REST API gives you this by default — each endpoint is a specific operation; nothing outside the configured list is exposed.
Pair PostgreSQL's row-level security with field-level redaction at the DreamFactory gateway. PostgreSQL RLS handles row-level filtering at the database layer; DreamFactory's per-role redaction handles column-level masking at the API layer. Both work together.
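The division of labour can be sketched simply: RLS decides which rows come back from the database, and the gateway decides which fields in those rows a given role may see. A minimal sketch of the gateway half; the column names and role policies are illustrative, not DreamFactory's actual configuration.

```python
# Per-role column masking applied to rows after the database has already
# enforced row-level security. Policies here are illustrative.
REDACTED_FIELDS = {
    "ai_readonly": {"ssn", "email"},  # mask PII before it reaches the agent
    "analyst": set(),                 # trusted human role sees everything
}

def redact(rows, role):
    hidden = REDACTED_FIELDS.get(role, set())
    return [
        {col: ("***" if col in hidden else val) for col, val in row.items()}
        for row in rows
    ]
```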
Every interaction logged: user prompt, agent's tool selection, parameters passed, resolved SQL, rows returned (or their count), and any redactions applied. DreamFactory captures user, timestamp, and payload for every API and MCP call.
For sensitive data, run the MCP server on-premises or in a VPC alongside PostgreSQL — never sending raw rows to public LLM providers. DreamFactory deploys via Linux installer, Docker, or Kubernetes (Helm chart), with air-gapped configurations supported. Healthcare, financial services, and government workloads typically require this.
DreamFactory's foundational capability is instant REST API generation: connect a database, get a complete, documented, role-protected REST API in minutes — no code written. The platform has been doing this for over a decade across 40+ supported data sources (SQL, NoSQL, cloud storage, SOAP services). Customers ranging from ExxonMobil, Toyota, and Deloitte to Google Cloud and the National Institutes of Health use DreamFactory as their data layer.
The AI Data Gateway is the newer chapter — an MCP server built into the platform that exposes the same auto-generated REST API to AI agents (Claude Desktop, Claude Code, ChatGPT, Cursor, Windsurf). The company describes it as "an auditable, governed API layer for AI": keep PostgreSQL intact, generate a governed API on top, and let AI agents query through that layer rather than receiving raw data extracts.
For a PostgreSQL deployment specifically, the platform provides:
DreamFactory holds a 4.7 rating on G2, with badges for "Easiest to Use," "Fastest Implementation," and "Best ROI." See the DreamFactory AI page or request a demo.
A PostgreSQL MCP server is a service that exposes a PostgreSQL database as tools an AI agent can call through the Model Context Protocol (MCP). The MCP server wraps queries as named tools with semantic descriptions; an MCP client like Claude reads the available tools and invokes them at runtime — letting the AI agent query PostgreSQL without writing raw SQL.
DreamFactory's foundational capability is instant REST API generation: connect a database (PostgreSQL, MySQL, SQL Server, Oracle, Snowflake, Databricks, MongoDB — 40+ supported), and the platform auto-generates a fully documented, role-protected REST API in minutes, with no code written. Authentication, RBAC, rate limiting, audit logging, and OpenAPI documentation are built in. The AI Data Gateway and MCP server sit on top of this same auto-generated API — agents call the same governed endpoints human developers would.
Yes. DreamFactory has been an API generation platform for over a decade and is widely deployed for use cases that have nothing to do with AI — modernising legacy systems, syncing enterprise data, and exposing data products inside large enterprises. MCP support is the newest surface; the REST API generation is the foundation.
Configure an MCP server that connects to your PostgreSQL database, then configure Claude Desktop, Claude Code, or the Claude API with MCP enabled to launch it. For prototyping, Anthropic's official postgres-mcp-server is the fastest path; for production, DreamFactory auto-generates a fully documented, secure REST API from your PostgreSQL schema in minutes and exposes the same endpoints to MCP clients via its built-in MCP server.
It's a useful starting point for prototyping but not production-ready out of the box. The official example provides read-only query access but lacks several controls enterprise teams need: identity passthrough, query allow-listing, field-level redaction, and audit logging. For production, layer these on top — or use DreamFactory, which provides them as part of the auto-generated API.
DreamFactory authenticates the calling user via any of OAuth 2.0, OpenID Connect, Azure AD / Entra ID, SAML 2.0, LDAP, or Active Directory. The user's identity drives DreamFactory's RBAC decision at the gateway. PostgreSQL's own row-level security policies continue to apply to queries DreamFactory issues, so PostgreSQL-native controls and gateway-level controls stack.
A REST API exposes resources via HTTP methods on URLs — designed for human developers and code-driven clients. An MCP server exposes capabilities specifically for AI agents: tools have semantic descriptions optimised for LLM reasoning, type-safe parameters, and a discovery mechanism. MCP servers often wrap underlying REST APIs — which is exactly what DreamFactory does: the same auto-generated REST endpoints are exposed via both surfaces.
Five essential controls: (1) Don't expose raw SQL — use allow-listed named tools backed by the auto-generated REST endpoints. (2) Use identity passthrough so the agent inherits user permissions. (3) Apply row- and column-level access controls (PostgreSQL's row-level security stacks with DreamFactory's gateway RBAC). (4) Log every prompt, query, and response for audit. (5) Run on-premises or in a VPC. See Zero Trust for LLMs.
Technically yes, but it's the highest-risk configuration. Most production deployments restrict MCP-exposed PostgreSQL access to reads only. If writes are required, isolate them behind explicit confirmation and constrain to allow-listed operations on specific tables. DreamFactory's RBAC per HTTP-verb makes this configuration trivial — allow GET, refuse PUT/POST/DELETE for the AI role.
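The verb-based policy described above reduces to a small lookup. A hypothetical sketch; the role names are illustrative, not DreamFactory's built-in roles.

```python
# Per-role, per-HTTP-verb policy: the AI role may read, never write.
ROLE_VERBS = {
    "ai_readonly": {"GET"},
    "backend_service": {"GET", "POST", "PUT", "PATCH", "DELETE"},
}

def is_allowed(role, verb):
    # Unknown roles fail closed: no verbs at all.
    return verb.upper() in ROLE_VERBS.get(role, set())
```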
Yes. DreamFactory's regulated-industry customers include the National Institutes of Health (government health research, on-prem SQL Server), Vermont Agency of Transportation (air-gapped Oracle and IBM mainframe), D.A. Davidson (financial services), and Pillsbury Law (legal). All run DreamFactory in production with role-based access control, audit logging, and either on-prem or air-gapped deployment.