If you’re building agentic AI applications, you’ve probably already hit the wall where your LLM needs to actually talk to a database. Not just dump a schema and hope for the best, but genuinely understand the data model, write reasonable queries, generate code for new UIs and even entire applications, and do it all without you holding its hand through every interaction. That’s the problem MCP servers are supposed to solve, and most of them do a decent enough job of it when you’re prototyping on your laptop.

Production is a different story.

The pgEdge MCP Server for Postgres is now generally available, and it’s also available as a managed service inside pgEdge Cloud. We built it because the gap between “works in a demo” and “runs in production with real security, real availability requirements, and real compliance constraints” is wider than most people realize, and the existing MCP servers out there weren’t closing it.

The Problem With Most MCP Servers

Here’s what typically happens. You grab an MCP server, wire it up to Claude Code or Cursor, point it at a local Postgres instance, and everything works great. Your LLM can introspect the schema, write queries, generate application code and UIs, and even suggest optimizations. You feel like you’re living in the future.

Then someone asks you to run it against the production database. The one with PII in it. The one that needs to stay in eu-west-1 or on-prem for compliance reasons. The one that can’t go down because three other services depend on it. And suddenly you’re staring at a tool that doesn’t support TLS, doesn’t have real authentication, can’t enforce read-only access, and definitely wasn’t designed to run in an air-gapped environment.

So we built an MCP server that closes that gap. The pgEdge MCP Server works with any standard Postgres database running v14 or newer (not just pgEdge’s own products), and it’s designed from the ground up for the kind of environments where “just spin it up and see what happens” isn’t an acceptable deployment strategy.

What Ships in pgEdge MCP Server v1.0

Let’s talk about what’s in the box, because the feature list matters more than the marketing language around it.

The first thing you’ll notice is the schema introspection. The server doesn’t just hand the LLM a list of table names and call it a day. It pulls primary keys, foreign keys, indexes, column types, constraints, and even partitioned table hierarchies, which means the LLM can actually reason about how your data model fits together instead of blindly firing off SELECT * queries and hoping something useful comes back. For databases with time-based partitioning (where you might have hundreds of child tables cluttering the context window), it recognizes partitioned parents and hides the children by default, keeping things clean for the model to work with.

On the security side, this isn’t a “we’ll add auth later” situation. The server supports stdio, HTTP, and HTTPS with TLS out of the box, along with user and token authentication that hot-reloads so you can rotate credentials without restarting anything. Read-only mode is enforced by default, and the 1.0 release goes further with active defense against bypass attacks, rejecting queries that try to manipulate transaction_read_only settings through PL/pgSQL DO blocks or set_config() calls. The net effect is that you can give your LLM agent access to production data without giving it the keys to the castle.

Multi-database support lets you connect to dev, staging, and production from the same server and switch between them, which sounds simple until you realize most MCP setups require a separate configuration for every environment you touch.
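As a rough sketch of what that single-server, multi-environment setup could look like, here is a hypothetical configuration file. The key names (`databases`, `name`, `connection_string`, `read_only`) are illustrative, not the server’s actual schema; consult the pgEdge MCP Server documentation for the real format.

```yaml
# Hypothetical sketch: one MCP server instance, three named Postgres
# targets the LLM can switch between. Field names are illustrative.
databases:
  - name: dev
    connection_string: "postgresql://app@localhost:5432/app_dev"
  - name: staging
    connection_string: "postgresql://app@staging.internal:5432/app?sslmode=require"
  - name: production
    connection_string: "postgresql://app@prod.internal:5432/app?sslmode=verify-full"
    read_only: true   # keep the agent read-only against production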

What’s New in the GA Release

The pgEdge MCP Server v1.0 release adds a bunch of capabilities that came directly out of developer feedback during the beta, and a few of them fundamentally change what you can do with the server.

The biggest one is probably custom tools. You can now extend the MCP server by writing tools in SQL, Python, Perl, or JavaScript, defining them in a YAML file and dropping them into your configuration. They show up as first-class MCP tools alongside the built-in ones, which means if your team has a specific workflow or analysis you run regularly, you can package it as something the LLM can invoke directly. This is where things start to get genuinely interesting for teams with domain-specific database operations they want to bring into their AI workflows.
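To make that concrete, here is a hedged sketch of what a SQL-backed custom tool definition could look like. The field names (`tools`, `name`, `description`, `language`, `parameters`, `code`) and the parameter-binding syntax are assumptions for illustration; the actual YAML schema is in the pgEdge MCP Server docs.

```yaml
# Illustrative custom tool definition: a domain-specific query packaged
# as an MCP tool the LLM can invoke directly. Schema is hypothetical.
tools:
  - name: orders_by_status
    description: "Count a customer's orders grouped by status."
    language: sql
    parameters:
      - name: customer_id
        type: integer
        description: "The customer to report on."
    code: |
      SELECT status, count(*) AS order_count
      FROM orders
      WHERE customer_id = :customer_id
      GROUP BY status
      ORDER BY order_count DESC;
```

Because the tool carries its own description and typed parameters, the model can discover and call it like any built-in tool rather than re-deriving the query from the schema each session.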

To show what that looks like in practice, we’re shipping a DBA starter pack as a drop-in YAML definitions file. It comes with three pre-built tools: get_top_queries for analyzing your most resource-consuming queries, analyze_db_health for running a seven-category health check, and recommend_indexes for two-tier index recommendations with optional HypoPG simulation. Think of it as giving your LLM a solid foundation of DBA knowledge out of the box, so it can start being useful for performance work without you having to teach it everything from scratch. (The blog post Replicating CrystalDBA With pgEdge MCP Server Custom Tools walks through these tools in more detail.)

On the operational side, there are a few things worth calling out. Multi-host connection support means the server handles HA and failover natively, using target_session_attrs to route reads to standbys and writes to the primary through libpq-compatible connection strings. Write query confirmation prompts you before the server executes any DDL or DML when write access is enabled, and the server sets MCP tool annotations (destructiveHint, readOnlyHint) so third-party clients can implement their own confirmation flows. And one-command installers for Claude Code and Claude Desktop automate the binary download, config generation, and client registration, so you don’t have to manually edit JSON files just to get started.
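The multi-host routing rides on standard libpq connection-string syntax, so a failover-aware setup can be expressed directly in the connection string. The hostnames and key names below are placeholders, but `target_session_attrs` and its `read-write` and `prefer-standby` values are standard libpq behavior (the latter since PostgreSQL 14):

```yaml
# Placeholder hosts; target_session_attrs is standard libpq syntax.
#   read-write      -> only connect to a host that accepts writes (the primary)
#   prefer-standby  -> favor a standby for read traffic, fall back to any host
write_connection: "postgresql://app@pg1.internal,pg2.internal:5432/app?target_session_attrs=read-write"
read_connection: "postgresql://app@pg1.internal,pg2.internal:5432/app?target_session_attrs=prefer-standby"
```

With both hosts listed, libpq walks the list until it finds one matching the requested session attributes, which is what lets the server survive a failover without a config change.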

There’s also a set of changes you’ll feel more than see. The server now uses tab-separated values instead of JSON for query results, paginates automatically, and applies context window compaction, all of which reduces token usage significantly. If you’re running agents that make a lot of database calls (and the interesting ones always do), this hits your API bill in a good way.

Works With What You’re Already Using

The pgEdge MCP Server supports the tools developers are actually reaching for today: Claude Code, Claude Desktop, Cursor, Windsurf, VS Code Copilot. On the model side, it works with Anthropic and OpenAI frontier models, plus locally hosted models through Ollama, LM Studio, and anything else that speaks the OpenAI API. The production CLI client includes Anthropic prompt caching, which cuts costs by up to 90% for repeated interactions against the same schema. You don’t have to rearchitect your AI stack to use this.

Deploy It Your Way

Every database product talks about deployment flexibility, but here’s what it actually looks like. If you want the fully managed path, pgEdge Cloud deploys the MCP Server alongside your database cluster with nothing extra to set up or maintain. If you want to run it yourself, there are Docker images on GitHub Container Registry and Docker Compose support for multi-instance deployments, giving you full control over configuration and networking in your own cloud environment. And if you need to run on-premises, including in air-gapped environments where nothing touches the public internet, the server compiles to a fully static binary with no C compiler dependency, so there’s nothing to fight with in locked-down environments.
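For the self-managed path, a minimal Docker Compose sketch might look like the following. The image name and tag, port, and config paths here are assumptions for illustration; the actual image coordinates are published on GitHub Container Registry and documented with the server.

```yaml
# Illustrative Compose sketch; image tag and paths are placeholders.
services:
  mcp-server:
    image: ghcr.io/pgedge/pgedge-mcp-server:latest   # placeholder image/tag
    ports:
      - "8443:8443"                       # HTTPS endpoint for MCP clients
    volumes:
      - ./config.yaml:/etc/pgedge-mcp/config.yaml:ro
      - ./tls:/etc/pgedge-mcp/tls:ro      # server certificate and key
    restart: unless-stopped
```

Because the binary is fully static, the same artifact that runs in this container can be copied into an air-gapped host with no runtime dependencies to install.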

The whole thing is open source under the PostgreSQL license, and pgEdge customers with paid subscriptions for Enterprise Postgres or Distributed Postgres get support at no extra cost.

Get Started

If you want the managed path, pgEdge Cloud has the MCP Server ready to go. Spin up a database, enable the MCP Server, and start querying.

For self-hosted deployments, see the documentation or download it now. It works with any standard Postgres v14+ database, so you can point it at whatever you’re already running.

If you want to kick the tires without any setup at all, the GitHub Codespaces demo gives you a one-click browser-based environment with sample data loaded and ready to query.

And if you’re in New York on April 2-3, come find us at the MCP Dev Summit 2026 at booth S13. We’ll be running live demos and would love to show you what this looks like in practice.