How to Use the pgEdge MCP Server for PostgreSQL with Claude Cowork
The rise of agentic AI is transforming how we build applications, and databases are at the center of this transformation. As AI agents become more sophisticated, they need reliable, real-time access to data.
If you’ve decided to use an MCP server to expose data to large language models (LLMs), whether to build internal tools for trusted users, apply sophisticated database schema changes, or translate natural language into SQL, the pgedge-postgres-mcp project (available on GitHub) is worth trying.
This 100% open source Natural Language Agent for PostgreSQL provides a connection between any MCP-compatible client (including AI assistants like Claude) and any standard flavor of Postgres, whether you’re starting a greenfield project or working with an existing database.
Connecting AI agents to PostgreSQL with pgedge-postgres-mcp
The Model Context Protocol (MCP) is a standardized way for AI assistants to communicate with external data sources. Think of MCP as a universal adapter; just as USB-C provides a standard connection for devices, MCP provides a standard way for AI agents to connect to tools, databases, and services.
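Under the hood, MCP messages are JSON-RPC 2.0. As a rough illustration of the message shape, the snippet below builds a hypothetical tool invocation; the "tools/call" method comes from the MCP specification, but the tool name and arguments are placeholders, not the actual tools exposed by pgedge-postgres-mcp:

```shell
# Illustration only: MCP clients and servers exchange JSON-RPC 2.0 messages.
# The "tools/call" method is part of the MCP spec; the tool name and
# arguments below are hypothetical placeholders.
cat > /tmp/mcp-request.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": { "sql": "SELECT version();" }
  }
}
EOF
# Pretty-print to confirm the message is well-formed JSON
python3 -m json.tool /tmp/mcp-request.json
```

An MCP client like Claude Desktop constructs and sends messages of this shape for you; you never write them by hand.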
pgedge-postgres-mcp implements this protocol specifically for PostgreSQL, creating a bridge between AI assistants and your data. It enables users to:
Query databases using natural language
Execute SQL queries safely in read-only transactions
Access local or distributed PostgreSQL instances
Work with production data safely (read-only by default)
Interact with database schemas and metadata
Instead of writing custom integration code for each AI application, you get a ready-to-use connection between AI agents and your PostgreSQL database. It works with any PostgreSQL instance, whether you're running locally for development or hosting remotely.
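The "read-only by default" behavior maps to a standard PostgreSQL mechanism: a transaction can be declared read-only, after which any write statement fails. A sketch of what that looks like at the SQL level (assumes psql is installed and $DATABASE_URL points at a reachable database; the server's actual enforcement may differ in detail):

```shell
# Sketch: read-only enforcement expressed in plain SQL
# (assumes psql and a reachable database in $DATABASE_URL).
psql "$DATABASE_URL" <<'SQL'
BEGIN TRANSACTION READ ONLY;
SELECT now();  -- reads are allowed
-- An INSERT/UPDATE/DELETE here would fail with:
--   ERROR: cannot execute INSERT in a read-only transaction
ROLLBACK;
SQL
```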
Benefits for AI Development
The pgEdge MCP Server solves a specific problem: giving AI assistants database access without building custom middleware, with configurable controls for authentication, read-only transactions, and per-database permissions.
For developers, it means:
No custom database connectors to write and maintain
Faster prototyping of AI-powered applications
Natural language database operations through Claude
The ability to test and iterate on AI workflows without writing integration code
For teams running production systems:
Read-only transaction enforcement for safety
Works with both single-node and distributed PostgreSQL
Open source with production-ready performance
Support for complex queries with semantic search capabilities
The pgEdge MCP Server also provides flexibility: you can start with a quick prototype and scale up to an enterprise-grade production AI application.
Claude vs. Claude Cowork: What's the Difference?
Claude is Anthropic's AI assistant available through claude.ai and API. It's conversational and helps with analysis, writing, code generation, and problem-solving. When you chat with Claude in your browser or integrate it into applications through the API, you're using the core AI model.
Claude Cowork is a research preview feature within the Claude Desktop application that brings agentic capabilities to Claude. Unlike standard chat interactions where Claude responds to one message at a time, Cowork enables Claude to take on complex, multi-step tasks and execute them autonomously on your behalf.
The key differences:
Agentic execution: Cowork can break complex projects into parallel workstreams and coordinate sub-agents to complete them
Extended processing: Work on complex tasks for extended periods without conversation timeouts or context limits
Direct file access: Read and write files on your local system without manual uploads and downloads
Professional outputs: Generate Excel files with formulas, PowerPoint presentations, formatted documents, and more
When you connect the pgEdge MCP Server to Claude Cowork, the AI can query your database, analyze the results, and produce complete deliverables—all autonomously. For example, you could ask Cowork to "analyze our customer data and create a quarterly report with charts," then return later to find a finished document ready for review.
Getting Started: The Simplest Example
If you have Go installed and a PostgreSQL database accessible from your machine, the fastest way to get started is to build the MCP server from source and connect it directly to Claude Desktop using stdio.
Prerequisites
Claude Desktop application installed (the MCP Server works with Claude Desktop even without Cowork)
Claude Pro, Max, Team, or Enterprise subscription with access to Cowork
Go 1.24.0 or later installed
PostgreSQL running locally or on your network
Step 1: Build the MCP Server
Clone the repository and build the server binary:
git clone https://github.com/pgEdge/pgedge-postgres-mcp.git
cd pgedge-postgres-mcp
make build-server

This produces the binary at bin/pgedge-postgres-mcp.
Step 2: Configure Claude Desktop
Edit the Claude Desktop configuration file at ~/Library/Application Support/Claude/claude_desktop_config.json:
{
  "mcpServers": {
    "pgedge": {
      "command": "/path/to/pgedge-postgres-mcp/bin/pgedge-postgres-mcp",
      "env": {
        "PGHOST": "localhost",
        "PGPORT": "5432",
        "PGDATABASE": "your-database-name",
        "PGUSER": "your-database-user",
        "PGPASSWORD": "your-database-password"
      }
    }
  }
}

Replace the path and database credentials with your own. Restart Claude Desktop, and the pgEdge MCP Server will appear as an available tool.
This approach runs the MCP server as a subprocess of Claude Desktop, communicating over stdio. There is no HTTP server, no authentication tokens, and no Node.js dependency. The trade-off is that the database must be directly reachable from the machine running Claude Desktop.
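A malformed configuration file typically causes the MCP server to silently fail to appear in Claude Desktop, so it can save a debugging round-trip to confirm the file parses as JSON before restarting the app. A minimal sketch of such a check, shown here against a throwaway demo file (for real use, point CONFIG at the macOS path given above):

```shell
# Demo of a JSON sanity check. For real use, set CONFIG to
# "$HOME/Library/Application Support/Claude/claude_desktop_config.json".
CONFIG=/tmp/claude_desktop_config.json
cat > "$CONFIG" <<'EOF'
{ "mcpServers": { "pgedge": { "command": "/path/to/bin/pgedge-postgres-mcp" } } }
EOF
if python3 -m json.tool "$CONFIG" > /dev/null 2>&1; then
  echo "config OK"
else
  echo "config INVALID"
fi
```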
Getting Started: The Deep Dive
The stdio approach is ideal for quick experimentation, but for team environments, remote access, or production deployments, you will want to run the MCP server as a network service. The following sections cover deploying the server in HTTP mode—either as a standalone binary or as a Docker container—and configuring authentication, multiple databases, and Claude Desktop connectivity.
Step 1: Install the pgEdge MCP Server
The pgEdge MCP Server can be deployed as a standalone binary managed by systemd, or as a Docker container. Choose the option that best fits your infrastructure.
Option A: Download the Binary
Download the pre-built binary for your platform from the pgEdge Enterprise repository. This option works well with systemd for service management.
You will need to configure authentication tokens for Claude Desktop to connect. See Step 2 for configuration details.
Option B: Docker Compose
For a containerized deployment, use the Docker Compose setup from the GitHub repository:
git clone https://github.com/pgEdge/pgedge-postgres-mcp.git
cd pgedge-postgres-mcp
cp .env.example .env
Edit the .env file with your configuration. At minimum, you need to set:
# Database connection
PGEDGE_DB_HOST=your-postgres-host
PGEDGE_DB_PORT=5432
PGEDGE_DB_NAME=your-database-name
PGEDGE_DB_USER=your-database-user
PGEDGE_DB_PASSWORD=your-database-password
# LLM API key (for the web interface)
PGEDGE_ANTHROPIC_API_KEY=your-anthropic-api-key
# Or use OpenAI:
# PGEDGE_OPENAI_API_KEY=your-openai-api-key
# Authentication token for API access (required for Claude Desktop)
INIT_TOKENS=my-secret-token-12345

The INIT_TOKENS setting creates an API token that you will use to connect Claude Desktop to the MCP server. Choose a secure token value and save it—you will need it in Step 3.
For local development only, you can disable authentication by not setting INIT_TOKENS, but this is not recommended for production or network-accessible deployments.
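The token value itself is arbitrary; any hard-to-guess string works. One common way to generate a strong value (assuming openssl is available on your system):

```shell
# Generate a random 24-byte, hex-encoded value suitable for INIT_TOKENS.
TOKEN=$(openssl rand -hex 24)
echo "INIT_TOKENS=$TOKEN"
```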
After configuring your .env file, start the containers:
docker-compose up -d

This deploys both the MCP server (port 8080) and a web interface (port 8081). The web interface at http://localhost:8081 is useful for testing your connection and exploring the server's capabilities.
The .env approach is convenient for development, but for production deployments you will want to use a YAML configuration file instead. The repository includes an examples/docker-compose.production.yml file that mounts a local directory for the server configuration file, giving you full control over all server options. If you use the production compose file, read Step 2 below for guidance on the YAML configuration format and token creation. For details, see the Docker deployment guide.
Step 2: Configure the MCP Server (Binary Deployments)
If you are using the standard docker-compose.yml with an .env file, you can skip this step—configuration is handled through environment variables.
For binary deployments, the MCP server reads its configuration from a YAML file named postgres-mcp.yaml. By default, the server searches for this file in /etc/pgedge/ first, then in the same directory as the binary. You can specify a different location using the -config flag:
./pgedge-postgres-mcp -config /path/to/your-config.yaml

Basic Configuration for HTTP Mode
If you are deploying the binary as a network service (e.g., with systemd), create a configuration file with HTTP mode enabled:
# Enable HTTP mode for network access
http:
  enabled: true
  address: ":8080"
  auth:
    enabled: true
    token_file: "./postgres-mcp-tokens.yaml"

# Database connection
databases:
  - name: "mydb"
    host: "localhost"
    port: 5432
    database: "mydb"
    user: "postgres"
    password: ""        # Leave empty to use .pgpass file
    sslmode: "prefer"
    allow_writes: false # Default: false (read-only)

Creating Authentication Tokens
For HTTP mode deployments, you need to create authentication tokens before clients can connect. Use the built-in token management command:
./pgedge-postgres-mcp -add-token -token-note "Claude Desktop"

This generates a new token and displays it. Save this token—you will need it to configure Claude Desktop in Step 3.
You can also list existing tokens:
./pgedge-postgres-mcp -list-tokens

For local development, you can disable authentication by setting http.auth.enabled: false in your configuration file, or by passing the -no-auth flag when starting the server. This is not recommended for production or network-accessible deployments.
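With the server running, you can sanity-check a token from the command line before wiring up Claude Desktop. The request below assumes the server is listening on localhost:8080 with the /mcp/v1 path used throughout this guide; an HTTP 401 response indicates the token was rejected, while any other status suggests authentication succeeded:

```shell
# Quick authentication check against a running MCP server
# (adjust host, port, and token). HTTP 401 means the token was rejected.
curl -s -o /dev/null -w "HTTP %{http_code}\n" \
  -H "Authorization: Bearer my-secret-token-12345" \
  http://localhost:8080/mcp/v1
```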
Configuring Multiple Databases
Beginning with beta 3, the MCP server supports multiple database connections, allowing agents like Claude to switch between databases at runtime. This is useful for environments with separate development, staging, and production databases:
http:
  enabled: true
  address: ":8080"
  auth:
    enabled: true
    token_file: "./postgres-mcp-tokens.yaml"

# Built-in tools configuration
# Enable or disable specific MCP tools
builtins:
  tools:
    # Enable LLM to list and switch database connections
    # When enabled, provides list_database_connections and select_database_connection tools
    # Use allow_llm_switching on individual databases to control which are visible
    llm_connection_selection: false # Default: false (disabled for security)

databases:
  - name: "production"
    host: "prod-db.example.com"
    port: 5432
    database: "prod_db"
    user: "readonly_user"
    password: ""
    sslmode: "require"
    allow_writes: false
    available_to_users: [] # Empty = all users can access
    allow_llm_switching: true
  - name: "staging"
    host: "staging-db.example.com"
    port: 5432
    database: "staging_db"
    user: "developer"
    password: ""
    sslmode: "prefer"
    available_to_users:
      - "alice"
      - "bob"
    allow_llm_switching: true
  - name: "development"
    host: "localhost"
    port: 5432
    database: "dev_db"
    user: "postgres"
    password: ""
    sslmode: "disable"
    allow_writes: true # Allow writes on dev only
    allow_llm_switching: true

Each database must have a unique name that users reference when switching connections. The available_to_users field controls which users can access each database: an empty list means all authenticated users have access.
The llm_connection_selection and allow_llm_switching options (available from v1.0.0-beta3) allow agents such as Claude to select a different database connection using MCP Tools (unlike the pgEdge Natural Language Agents which use more tightly controlled REST APIs).
For a complete configuration reference with all available options, see the server configuration documentation.
Step 3: Connect Claude Desktop to the MCP Server
How you connect Claude Desktop to the MCP server depends on your deployment method.
Claude Desktop Configuration File
Claude Desktop reads MCP server configurations from a JSON file. On macOS, this file is located at:
~/Library/Application Support/Claude/claude_desktop_config.json

If this file does not exist, create it with a text editor.
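You can also create a minimal skeleton from the terminal and then add your server entry with a text editor. A sketch using the macOS path:

```shell
# Create the Claude Desktop config directory and an empty skeleton file
# if one does not already exist (macOS location shown).
CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG="$CONFIG_DIR/claude_desktop_config.json"
mkdir -p "$CONFIG_DIR"
[ -f "$CONFIG" ] || printf '{\n  "mcpServers": {}\n}\n' > "$CONFIG"
python3 -m json.tool "$CONFIG"
```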
For HTTP Mode (Docker or Binary with systemd)
When the MCP server is running in HTTP mode, you need to use the mcp-remote proxy to connect. This requires passing the authentication token you configured in Step 1 or Step 2:
{
  "mcpServers": {
    "pgedge": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:8080/mcp/v1",
        "--header",
        "Authorization: Bearer my-secret-token-12345"
      ]
    }
  }
}

Replace my-secret-token-12345 with the token you configured in your .env file (for Docker) or created with -add-token (for binary deployments).
Replace http://localhost:8080/mcp/v1 with your server's address if it is running on a different host or port. The /mcp/v1 path is required.
Note: This method requires Node.js to be installed on your system. The npx command (included with Node.js) automatically downloads and runs the mcp-remote package. If you don't have Node.js installed, you can install it on macOS with Homebrew:

brew install node

Alternatively, download it from nodejs.org.
After saving the file, restart Claude Desktop for the changes to take effect.
For Remote HTTPS Servers
When the MCP server is deployed remotely with HTTPS enabled, you can use Claude Desktop's built-in connector feature instead of editing the configuration file:
Open Claude Desktop and go to Settings > Connectors
Click Add custom connector
Enter your MCP server URL (e.g., https://mcp.example.com:8080)
Click Add
This method requires HTTPS—Claude Desktop's custom connector feature does not support plain HTTP URLs.
Step 4: Query Your Database
Now you can ask Claude Cowork to work with your database using plain English:
You: "Show me the top 10 customers by revenue."
Claude Cowork: Connects to the database and executes a query:

SELECT customer_name, SUM(order_total) AS total_revenue
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
GROUP BY customer_name
ORDER BY total_revenue DESC
LIMIT 10;

It then returns the results in a formatted table.
The AI understands your request, translates it into appropriate SQL (inferring the semantics of the database from the schema object names it finds), executes the query in a read-only transaction, and presents the results in a readable format.
Step 5: Combine Database Queries with File Operations
The real power of Cowork emerges when you combine database queries with its file system capabilities:
"Pull last month's sales data and create an Excel spreadsheet with charts showing trends by region"
"Analyze customer purchase patterns and generate a PowerPoint presentation with key insights"
"Query the orders table for delayed shipments and create a formatted report I can send to the logistics team"
"Compare this quarter's revenue to last quarter and save a summary document to my Desktop"
Because Claude Cowork can interact with both your database and your file system, it can turn database insights into actionable outputs—reports, charts, presentations, or spreadsheets—all without manual intervention. You can describe the outcome you want, let Cowork run, and return to find completed deliverables.
Step 6: Explore Your Database Structure
You can also ask Claude Cowork to help you understand your databases:
You: "What tables are in the database and what do they contain?"
You: "Show me the schema for the orders table."
You: "What columns have foreign key relationships?"
This is helpful when working with unfamiliar databases or when you need to understand how data is structured before writing more complex queries.
Common Use Cases
Here are some practical ways to use the pgEdge MCP Server with Claude Cowork.
Ad-hoc reporting: "Pull all orders from Q4 where the shipping cost exceeded 10% of the order value, and put it in a spreadsheet with a summary tab." Instead of writing the query, exporting to CSV, and formatting in Excel, you describe the output and get a finished file.
Investigating issues: A customer reports a billing discrepancy. Ask Cowork to trace their order history, compare invoice amounts to order totals, and flag any mismatches. You get a summary of what it found rather than spending an hour joining tables.
Understanding unfamiliar databases: You inherit a database with 200 tables and no documentation. Ask Cowork to explore the schema, identify the core entities, and explain how they relate. It's faster than reading through pg_catalog yourself.
Data quality checks: "Are there any orders with a ship date before the order date? Any customers with duplicate email addresses?" Run sanity checks in plain English and get results you can act on.
Building queries: When you know what you want but not the exact SQL, describe it and let Cowork write the query. Review what it produces, learn the schema as you go, and iterate until you have what you need.
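For comparison, the data-quality checks described above map to SQL along these lines; the table and column names are assumptions for illustration, since Cowork would infer the real ones from your schema:

```shell
# Hypothetical SQL behind the plain-English data-quality checks above
# (schema names are illustrative; assumes psql and a reachable $DATABASE_URL).
psql "$DATABASE_URL" <<'SQL'
-- Orders with a ship date before the order date
SELECT order_id, order_date, ship_date
FROM orders
WHERE ship_date < order_date;

-- Customers with duplicate email addresses
SELECT email, count(*) AS occurrences
FROM customers
GROUP BY email
HAVING count(*) > 1;
SQL
```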
Why This Matters
The pgEdge MCP Server combined with Claude Cowork changes how you interact with databases. Instead of SQL being a barrier between you and your data, an AI agent becomes an intelligent intermediary that understands both what you want and how your database is structured—and can deliver complete, polished outputs.
Enterprise-Grade Postgres Features for Production Use
As you move from experimentation to production use, the foundation matters. The pgEdge MCP Server is built with enterprise requirements in mind:
Security: Read-only transactions by default, TLS support, token authentication
Reliability: Production-tested connection handling and query execution
Performance: Connection pooling and efficient query processing
Compatibility: Support for standard PostgreSQL features, extensions, and tools including pgvector for semantic search
The Distributed Postgres Advantage
This becomes particularly important for distributed PostgreSQL deployments. pgEdge's infrastructure is built for multi-region, active-active database architectures. As your database scales across nodes and regions, having an AI agent that can intelligently query the right data becomes increasingly valuable.
Getting Started Today
As the technology matures, new ways of interacting with databases keep emerging. With Claude Cowork and the pgEdge MCP Server, you can describe the outcome you want (a report, an analysis, a data export) and let an AI agent handle the queries, analysis, and document creation. It’s a new way to work with data that shortens the path from question to finished deliverable.
The pgEdge MCP Server is open source under the PostgreSQL license and is ready to use. Visit the pgEdge MCP Server GitHub repository for documentation and configuration examples.
To download the MCP Server binary, visit the pgEdge Enterprise repository. For containerized deployments, clone the repository and use the included Docker Compose setup, or follow the Docker deployment guide.
The pgEdge MCP Server is part of the larger pgEdge Agentic AI Toolkit, which includes additional tools and integrations for building AI-powered applications on PostgreSQL. Learn more at pgedge.com/ai.



