PostgreSQL MCP Server: Setup, Security & Best Practices for AI Agents

Last updated: May 2026

A PostgreSQL MCP server is a service that exposes PostgreSQL databases as tools an AI agent can call through the Model Context Protocol (MCP). Rather than giving an LLM direct database credentials, you put an MCP server between the agent and the database. The agent discovers what queries it can run, calls them as named tools, and the MCP server translates those calls into safe, governed SQL against PostgreSQL.

The pragmatic path to a production-grade PostgreSQL MCP server is built on a foundation that already exists in most modern environments: an auto-generated REST API over the PostgreSQL schema. Tools like DreamFactory have been doing exactly this for over a decade — connect a database, get a fully documented, role-protected REST API in minutes, no code written. The MCP server is then a thin surface on top of that same generated API. DreamFactory's framing: "turn your database into an auditable, governed API layer for AI" — the API layer is the foundation; the AI surface is the addition. This guide covers what a PostgreSQL MCP server actually is, the three common ways to set one up, the security controls a production deployment needs, and how the auto-generated-API-plus-MCP pattern handles the governance.

Quick summary

  • What an MCP server does: Exposes PostgreSQL as MCP tools (named functions) an AI agent can call.
  • What sits underneath: A REST API over the PostgreSQL schema. The MCP tools are usually just the same REST endpoints exposed via MCP rather than HTTP.
  • Three approaches: Anthropic's official postgres-mcp-server (prototyping), a custom MCP server (full control), or an AI Data Gateway like DreamFactory (auto-generated REST API + MCP server, production-ready).
  • Common clients: Claude Desktop, Claude Code, ChatGPT, Cursor, Windsurf — any MCP-capable client.
  • Production must-haves: identity passthrough, query allow-listing, field-level redaction, audit logging, on-premises/VPC deployment for sensitive data.
  • Brand principle that matters here: "Data stays put." DreamFactory's gateway model means PostgreSQL records never leave your perimeter.

What is the Model Context Protocol (MCP)?

The Model Context Protocol is an open standard, introduced by Anthropic and rapidly adopted by other LLM providers, that defines how AI models discover and call external tools at runtime. An MCP server exposes tools (callable functions with typed parameters and semantic descriptions) and resources (readable data). An MCP client — Claude Desktop, Claude Code, the Claude API with MCP enabled, ChatGPT, Cursor, Windsurf, or any other MCP-capable agent — consumes those tools.

As DreamFactory describes the pattern: MCP gives teams "a single reusable port for multiple AI assistants" rather than bespoke integrations per LLM platform. The same MCP-exposed PostgreSQL tools work across every MCP-capable client.

For PostgreSQL specifically, MCP solves a problem that has dogged AI integration since LLMs got tool-use: how to expose data to an agent safely. Direct database credentials are too dangerous; bespoke integrations don't scale. MCP gives you a structured middle ground — named tools, semantic descriptions, type-safe parameters — that constrains what the agent can do while keeping the developer experience simple.
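The "named tools with typed parameters" idea can be sketched in a few lines. The tool names and parameter schemas below are hypothetical illustrations, not any real server's API:

```python
# Minimal sketch of MCP-style tool discovery and allow-listed dispatch.
# Each tool has a name, a semantic description the LLM reads, and a
# typed parameter schema. Tool names and schemas here are hypothetical.
TOOLS = {
    "get_customer_by_email": {
        "description": "Look up a single customer record by email address.",
        "parameters": {"email": {"type": "string", "required": True}},
    },
    "list_recent_orders": {
        "description": "List orders placed in the last N days.",
        "parameters": {"days": {"type": "integer", "required": True}},
    },
}

def list_tools() -> list[dict]:
    """What an MCP client sees when it asks the server for its tools."""
    return [{"name": name, **spec} for name, spec in TOOLS.items()]

def call_tool(name: str, arguments: dict) -> dict:
    """Reject anything outside the allow-list before any SQL is involved."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    missing = [
        p for p, spec in TOOLS[name]["parameters"].items()
        if spec.get("required") and p not in arguments
    ]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    # A real server would now resolve this to a REST call / SQL query.
    return {"tool": name, "arguments": arguments}
```

The agent only ever sees `list_tools()` output; a request for anything outside the registry fails before a query exists.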

The foundation: auto-generated REST APIs over PostgreSQL

Before discussing MCP-specific patterns, it's worth being explicit about the layer underneath. Every production MCP server for PostgreSQL is, in practice, a REST API for PostgreSQL plus a thin MCP surface on top. The MCP server exposes tools; those tools resolve to HTTP calls; the HTTP calls hit a REST API that talks to PostgreSQL.

This is where DreamFactory's history matters. The product has been auto-generating REST APIs from databases for over a decade. Connect to a PostgreSQL database, and DreamFactory:

  • Introspects the PostgreSQL schema (tables, views, stored procedures, relationships, JSON columns)
  • Generates a complete REST API with CRUD operations, filtering, sorting, pagination, and bulk operations
  • Generates OpenAPI / Swagger documentation automatically
  • Wraps every endpoint in role-based access control at the table, endpoint, and HTTP-verb level
  • Applies authentication (Entra ID, OAuth 2.0, OpenID Connect, SAML 2.0, LDAP, AD, API keys)
  • Enforces rate limits per user, role, or service
  • Logs every call with calling user, timestamp, and payload for audit

That REST API is useful by itself — human developers integrate with it, mobile apps consume it, internal tools depend on it. The MCP server is the next surface for the same generated endpoints. One configuration; two surfaces. As DreamFactory's CTO Kevin McGahey demonstrates in the platform's MCP tutorial, the entire pipeline — database credentials in a config file, MCP call, auto-generated API, read-only role, API key, working dashboard built by Claude Code — completes in minutes.

What a PostgreSQL MCP server does specifically

A PostgreSQL MCP server is an MCP server whose tools wrap PostgreSQL queries — typically via the auto-generated REST API described above. Instead of exposing a generic "run SQL" capability (high risk), a well-designed PostgreSQL MCP server exposes named, scoped, semantically meaningful tools — for example, "get_customer_by_email", "list_recent_orders", "compute_monthly_revenue".

The agent reads the tool descriptions, picks the right one for the user's question, and calls it with structured parameters. The MCP server resolves the call to the matching REST endpoint, which translates it into a safe, parameterised SQL query against PostgreSQL and returns the result. The agent never writes or sees raw SQL.

This matters because LLMs are non-deterministic. The same user request can produce dozens of slightly different SQL queries, some of which might be wasteful, expensive, or unsafe. Constraining the model to an allow-listed set of tools turns an unpredictable SQL-generation problem into a predictable tool-selection problem — far easier to audit, govern, and trust. DreamFactory's framing for this principle is "containment by design" — every AI action is bounded by the policies you set.

Why teams expose PostgreSQL to LLMs

The use cases break down into three categories:

  • Conversational analytics: A non-technical user asks the agent a question ("how many active customers did we add last month in EMEA?"), and the agent runs an appropriate query.
  • Operational assistance: An agent acting on behalf of a support engineer queries customer records, recent orders, or system status without the engineer needing to write SQL manually.
  • Agent-driven workflows: An autonomous agent reads from PostgreSQL as one step in a multi-step task.

The common thread: work that previously required custom dashboards, manual SQL, or hard-coded integration code is now done by an LLM calling a constrained, governed set of tools against your database. DreamFactory's customers using this pattern include ExxonMobil, Toyota, Saint-Gobain, PPG, Deloitte, Google Cloud, AkerBP, Netgear, Miller Industries, the National Institutes of Health, the Vermont Agency of Transportation, D.A. Davidson, and Pillsbury Law.

The three ways to set up a PostgreSQL MCP server

| Approach | Best for | What gets generated | Time to set up | Production-ready? |
| --- | --- | --- | --- | --- |
| Anthropic's official postgres-mcp-server | Prototyping, exploration, demos | Read-only MCP server with parameterised query tool | ~10 minutes | No — needs hardening |
| Custom MCP server (TypeScript / Python SDK) | Specialised tools, custom logic, niche control | Whatever you build | Days to weeks | Yes, if you build the controls |
| DreamFactory AI Data Gateway | Production, multi-tenant, regulated, enterprise | Full REST API over PostgreSQL (OpenAPI-documented, role-protected, rate-limited, audit-logged) plus an MCP server exposing the same endpoints to AI agents | Minutes | Yes — designed for it |

1. Anthropic's official postgres-mcp-server

The simplest starting point. Anthropic publishes a reference implementation in the modelcontextprotocol/servers repository. It takes a PostgreSQL connection string and exposes a small set of read-only tools. Configure Claude Desktop to launch it, and you're querying your database from within a Claude conversation in under fifteen minutes.

The trade-offs are deliberate. The official server is meant as a reference, not a hardened production component. It runs with the credentials you give it (no per-user identity), it doesn't enforce query allow-listing beyond read-only, it has no built-in audit log, and it expects you to handle network exposure and TLS yourself. For prototyping, that's fine. For anything touching production data, layer additional controls on top — or skip to one of the next two approaches.
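As a rough sketch, a Claude Desktop configuration for the reference server looks like the fragment below. Check the modelcontextprotocol/servers README for the current package name and options; the connection string is a placeholder, and the database user should be read-only:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://readonly_user:password@localhost:5432/mydb"
      ]
    }
  }
}
```

Note that whatever credentials appear here are the credentials the agent effectively runs with — there is no per-user identity at this layer.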

2. Custom MCP server using the SDK

The MCP SDKs (TypeScript and Python are the most mature) let you build a custom server from scratch. You define each tool, write the underlying query logic, and ship the result as a binary or container.

This makes sense when you want very specific, opinionated tools: "get_customer_lifetime_value", "find_similar_orders", "summarise_recent_activity_for_user". You're trading setup time for control. Plan on several days of engineering for a tightly-scoped server with a handful of tools, plus the ongoing work of building authentication, query governance, observability, and an audit log around it.

3. DreamFactory AI Data Gateway — REST API plus MCP, auto-generated

DreamFactory's AI Data Gateway takes the API-generation foundation the product has had for years and adds the MCP surface on top. Connect it to a PostgreSQL database and:

  1. DreamFactory introspects the PostgreSQL schema and auto-generates a full REST API — CRUD operations, OpenAPI documentation, related-data retrieval, filtering, sorting, pagination, bulk operations, stored procedure exposure.
  2. The same configuration wires the REST endpoints into DreamFactory's built-in MCP server, making them callable from Claude Desktop, Claude Code, ChatGPT, Cursor, Windsurf, or any other MCP-capable client.
  3. Authentication, RBAC, field-level redaction, rate limiting, and audit logging are applied at the gateway layer — applying equally to REST consumers (humans, mobile apps, integrations) and MCP consumers (AI agents).
  4. PostgreSQL's own governance (row-level security policies, role-based grants) continues to apply at the database layer, stacked underneath.

For most enterprise deployments — especially anything regulated — this is the path of least resistance to a production deployment.

How a PostgreSQL MCP server works end-to-end

The request-response flow for a typical query:

  1. User prompts the agent: "How many customers signed up last week?"
  2. Agent reads available tools from the MCP server's tool list. It sees a tool called "count_customers_in_date_range" — backed by an auto-generated REST endpoint — with parameters start_date and end_date.
  3. Agent decides to call the tool with the appropriate date range.
  4. MCP server resolves the call to the REST endpoint, validates the parameters, and DreamFactory's gateway evaluates RBAC.
  5. PostgreSQL executes the query under the appropriate role; any row-level security policy continues to filter results.
  6. DreamFactory applies any gateway-level field redaction required by the role policy.
  7. DreamFactory logs the interaction — prompt, tool call, parameters, query, result count — for audit.
  8. Agent receives the result and uses it to compose the final response.

Security controls every production PostgreSQL MCP server needs

Identity passthrough

The agent has its own credentials, but the audit trail and access decisions are driven by the calling human user. DreamFactory's RBAC layer authenticates the user via Entra ID, OAuth 2.0, SAML, LDAP, or Active Directory, then enforces per-table, per-endpoint, per-HTTP-verb access. PostgreSQL's role-based grants and row-level security policies remain in effect at the database layer.

Query allow-listing

Don't expose a generic "run any SQL" tool. Expose named tools that represent specific business operations. DreamFactory's auto-generated REST API gives you this by default — each endpoint is a specific operation; nothing outside the configured list is exposed.

Row- and column-level access controls

Pair PostgreSQL's row-level security with field-level redaction at the DreamFactory gateway. PostgreSQL RLS handles row-level filtering at the database layer; DreamFactory's per-role redaction handles column-level masking at the API layer. Both work together.

Audit logging

Every interaction logged: user prompt, agent's tool selection, parameters passed, resolved SQL, rows returned (or their count), and any redactions applied. DreamFactory captures user, timestamp, and payload for every API and MCP call.

Network exposure and air-gapping

For sensitive data, run the MCP server on-premises or in a VPC alongside PostgreSQL, so raw rows are never sent to public LLM providers. DreamFactory deploys via Linux installer, Docker, or Kubernetes (Helm chart), with air-gapped configurations supported. Healthcare, financial services, and government workloads typically require this.

Common pitfalls when exposing PostgreSQL to AI agents

  • Granting unrestricted SQL access. Constrain to named tools or a strict allow-list. The DreamFactory auto-generated REST API gives you this by construction.
  • Using a shared service account. You lose the audit trail and any row-level enforcement. Use identity passthrough via DreamFactory's RBAC layer.
  • Skipping query budgets. An agent can write a query that table-scans a hundred million rows. Set timeouts and row limits.
  • Exposing internal schema names. If your tools surface raw table and column names, the agent's responses leak schema details. Translate to business-friendly names in the API and MCP layer.
  • Forgetting about writes. Most deployments should keep MCP-exposed access read-only. DreamFactory's per-HTTP-verb RBAC makes this trivial: allow GET, refuse PUT/POST/DELETE for the AI role.

When to choose which approach

  • You're prototyping: Anthropic's official postgres-mcp-server.
  • You need very specific tools: Build a custom MCP server with the SDK.
  • You need a production REST API over PostgreSQL anyway, and want AI access too: DreamFactory. The REST API generation alone justifies the platform; the MCP layer is included.
  • You're going to production, the data is sensitive: DreamFactory.
  • Regulated industry: DreamFactory, almost always.

How DreamFactory fits in

DreamFactory's foundational capability is instant REST API generation: connect a database, get a complete, documented, role-protected REST API in minutes — no code written. The platform has been doing this for over a decade across 40+ supported data sources (SQL, NoSQL, cloud storage, SOAP services). Customers like ExxonMobil, Toyota, Saint-Gobain, PPG, Deloitte, Google Cloud, AkerBP, Netgear, Miller Industries, the National Institutes of Health, the Vermont Agency of Transportation, D.A. Davidson, and Pillsbury Law use DreamFactory as their data layer.

The AI Data Gateway is the newer chapter — an MCP server built into the platform that exposes the same auto-generated REST API to AI agents (Claude Desktop, Claude Code, ChatGPT, Cursor, Windsurf). The company describes it as "an auditable, governed API layer for AI": keep PostgreSQL intact, generate a governed API on top, and let AI agents query through that layer rather than receiving raw data extracts.

For a PostgreSQL deployment specifically, the platform provides:

  • Auto-generated REST API: CRUD operations, OpenAPI documentation, related-data retrieval, filtering, sorting, pagination, bulk operations, stored procedure exposure. Generated from the PostgreSQL schema in minutes.
  • Built-in MCP server: the same generated endpoints exposed to MCP-capable clients.
  • Authentication: Azure AD / Entra ID, OAuth 2.0, OpenID Connect, SAML 2.0, LDAP, Active Directory, API keys, session-based.
  • RBAC: per-table, per-endpoint, per-HTTP-verb. Same policy applies to REST and MCP traffic.
  • Field-level redaction: per role at the gateway, stacking on top of PostgreSQL's row-level security.
  • Audit: every call logged with user, timestamp, and payload.
  • Rate limiting: per user, role, or service.
  • Server-side scripting: pre- and post-process scripts in PHP, Python, or Node.js for custom business logic.
  • Deployment: Linux installer, Windows (IIS/Apache), Docker, Kubernetes via Helm chart, or NPX quick install. Air-gapped supported.

G2 rates DreamFactory 4.7 with badges for "Easiest to Use," "Fastest Implementation," and "Best ROI." See the DreamFactory AI page or request a demo.

Frequently asked questions

What is a PostgreSQL MCP server?

A PostgreSQL MCP server is a service that exposes a PostgreSQL database as tools an AI agent can call through the Model Context Protocol (MCP). The MCP server wraps queries as named tools with semantic descriptions; an MCP client like Claude reads the available tools and invokes them at runtime — letting the AI agent query PostgreSQL without writing raw SQL.

What is DreamFactory's core capability?

DreamFactory's foundational capability is instant REST API generation: connect a database (PostgreSQL, MySQL, SQL Server, Oracle, Snowflake, Databricks, MongoDB — 40+ supported), and the platform auto-generates a fully documented, role-protected REST API in minutes, with no code written. Authentication, RBAC, rate limiting, audit logging, and OpenAPI documentation are built in. The AI Data Gateway and MCP server sit on top of this same auto-generated API — agents call the same governed endpoints human developers would.

Does DreamFactory work without using MCP at all?

Yes. DreamFactory has been an API generation platform for over a decade and is widely deployed for use cases that have nothing to do with AI — modernising legacy systems, syncing enterprise data, and exposing data products inside large enterprises. MCP support is the newest surface; the REST API generation is the foundation.

How do I connect Claude to PostgreSQL?

Configure an MCP server that connects to your PostgreSQL database, then configure Claude Desktop, Claude Code, or the Claude API with MCP enabled to launch it. For prototyping, Anthropic's official postgres-mcp-server is the fastest path; for production, DreamFactory auto-generates a fully documented, secure REST API from your PostgreSQL schema in minutes and exposes the same endpoints to MCP clients via its built-in MCP server.

Is the Anthropic postgres-mcp-server safe for production use?

It's a useful starting point for prototyping but not production-ready out of the box. The official example provides read-only query access but lacks several controls enterprise teams need: identity passthrough, query allow-listing, field-level redaction, and audit logging. For production, layer these on top — or use DreamFactory, which provides them as part of the auto-generated API.

How does identity flow from an AI agent to PostgreSQL?

DreamFactory authenticates the calling user via any of OAuth 2.0, OpenID Connect, Azure AD / Entra ID, SAML 2.0, LDAP, or Active Directory. The user's identity drives DreamFactory's RBAC decision at the gateway. PostgreSQL's own row-level security policies continue to apply to queries DreamFactory issues, so PostgreSQL-native controls and gateway-level controls stack.

What's the difference between an MCP server and a REST API?

A REST API exposes resources via HTTP methods on URLs — designed for human developers and code-driven clients. An MCP server exposes capabilities specifically for AI agents: tools have semantic descriptions optimised for LLM reasoning, type-safe parameters, and a discovery mechanism. MCP servers often wrap underlying REST APIs — which is exactly what DreamFactory does: the same auto-generated REST endpoints are exposed via both surfaces.

How do I secure a PostgreSQL MCP server?

Five essential controls: (1) Don't expose raw SQL — use allow-listed named tools backed by the auto-generated REST endpoints. (2) Use identity passthrough so the agent inherits user permissions. (3) Apply row- and column-level access controls (PostgreSQL's row-level security stacks with DreamFactory's gateway RBAC). (4) Log every prompt, query, and response for audit. (5) Run on-premises or in a VPC. See Zero Trust for LLMs.

Can a PostgreSQL MCP server perform writes (INSERT/UPDATE/DELETE)?

Technically yes, but it's the highest-risk configuration. Most production deployments restrict MCP-exposed PostgreSQL access to reads only. If writes are required, isolate them behind explicit confirmation and constrain to allow-listed operations on specific tables. DreamFactory's RBAC per HTTP-verb makes this configuration trivial — allow GET, refuse PUT/POST/DELETE for the AI role.

Is DreamFactory safe for regulated industries?

Yes. DreamFactory's regulated-industry customers include the National Institutes of Health (government health research, on-prem SQL Server), Vermont Agency of Transportation (air-gapped Oracle and IBM mainframe), D.A. Davidson (financial services), and Pillsbury Law (legal). All run DreamFactory in production with role-based access control, audit logging, and either on-prem or air-gapped deployment.
