Database latency and network availability can quickly become bottlenecks when building modern, globally distributed applications. pgEdge solves this with a fully distributed, multi-master PostgreSQL implementation—and now it's easier than ever to deploy with the pgEdge Helm Chart.

This post walks you through deploying two flexible cluster topologies with Helm and shows how to enable the ACE Active Consistency Engine for data drift detection and correction. The workflow produces a Kubernetes-native environment that runs locally or across regions.

What You'll Need

To follow along with this guide, make sure you have:

  • A working Kubernetes environment (e.g., Minikube, kind, or a cloud setup)

  • Helm 3+

  • kubectl for managing k8s clusters

  • Docker Desktop for local development

  • make to run the makefile commands

Optional (but recommended):

  • Multiple K8s clusters that simulate multi-region deployments

  • Ingress controller or service mesh to expose endpoints

  • Cilium CLI & subctl for multi-zone local clusters

Configuration and Customization

Before deployment, review the values.yaml file, where you can configure the following (an illustrative example appears after this list):

  • Database and user names

  • Node-specific settings

  • Networking options

  • Resource allocations

  • Security configurations
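To make that concrete, here’s a minimal sketch of what an override file could look like. The key names below (dbSpec, dbName, users, resources) are illustrative assumptions rather than the chart’s authoritative schema; check the values.yaml shipped with the chart for the real options.

# example-values.yaml (hypothetical key names; consult the chart's values.yaml)

pgedge:
  dbSpec:
    dbName: defaultdb
    users:
      - username: app
        superuser: false
  nodeCount: 2
  resources:
    requests:
      cpu: "1"
      memory: 2Gi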

The pgEdge Helm Chart deploys pgEdge as a StatefulSet. It runs locally with kind or on cloud platforms like Azure Kubernetes Service, and is easy to adapt to other Kubernetes providers.
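Because the chart manages a StatefulSet, standard kubectl commands are enough to sanity-check a deployment. This assumes the StatefulSet is named pgedge, consistent with the pgedge-0 and pgedge-1 pod names used later in this post:

# Inspect the StatefulSet created by the chart

kubectl get statefulset pgedge

# Watch the pods come up with stable ordinal names (pgedge-0, pgedge-1, ...)

kubectl get pods -w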

Cluster Topologies with the Helm Chart

The pgEdge Helm chart supports two primary deployment patterns:

1. Local Multi-Node Cluster (Single K8s Cluster)

This deployment type is ideal for dev/test workflows or CI pipelines. It creates multiple Postgres nodes in a single Kubernetes cluster, typically using a local environment like Minikube or kind. The following commands were tested using kind.

Features:

  • All nodes run in one Kubernetes cluster

  • Lower overhead, faster startup

  • Useful for integration tests and proof-of-concept demos

  • Fast setup with minimal networking complexity

Use the following built-in makefile commands to set up a single-zone cluster:

# Get a local copy of the GitHub repository on your machine

git clone https://github.com/pgEdge/pgedge-helm.git

# Change directory to your new folder

cd pgedge-helm

# Create the local Kubernetes cluster

make single-up

# Install the Helm chart on the local cluster

make single-install

# Monitor deployment status

kubectl get pods

Ready to test the deployment? Once it’s up and running, you can connect to both pods from two different terminal windows.

# In window 1, connect to pod 0:

kubectl --context kind-single exec -it pgedge-0 -- psql -U app defaultdb

# In window 2, connect to pod 1:

kubectl --context kind-single exec -it pgedge-1 -- psql -U app defaultdb

# In pod 0, time to create a table:

create table public.users (id uuid default gen_random_uuid(), name text, primary key(id));

# Verify the table was created, first in pod 0, then in pod 1:

\d

# Test inserting a few new rows in pod 1:

insert into users (name) values ('Alison'),('Lee'),('Cheyenne');

# Check that the rows were inserted correctly in pod 1, then do

# the same in pod 0:

select * from users;

When you’re ready to clean up:

make single-down

Logical replication with automatic DDL updates enabled ensures that both DDL (schema changes) and DML (data modifications) are synchronized across all nodes in the cluster.
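You can observe this replication machinery from inside a node by querying the Spock extension that powers pgEdge. A quick sanity check from psql, assuming the spock.node catalog and sub_show_status() function exposed by current Spock releases:

# List the replication nodes known to this instance

select * from spock.node;

# Show subscription status; 'replicating' means changes are flowing

select * from spock.sub_show_status();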

2. Multi-Region Active-Active Cluster (Multiple K8s Clusters)

This is where pgEdge shines: deploying globally distributed, synchronized PostgreSQL nodes across different geographic regions. The Helm chart lets you deploy a multi-zone cluster for testing, using Cilium to create secure networking between the zones, which in our example are designated IAD and SFO.

Features:

  • Each node runs in its own region or cluster

  • Full mesh logical replication between nodes

  • Built-in support for conflict resolution, failover, and HA

  • Nodes operate across cloud regions or data centers

  • Delivers ultra-low latency for globally distributed apps

The deployment process closely resembles the above steps, but with this method, you’ll end up with two pgEdge nodes in two zones.

For multi-zone testing with secure networking, you can use the following make commands. This can take a while as it creates two k8s clusters and uses Cilium to set up secure communication between them.

# Create multiple Kubernetes clusters with Cilium networking

make multi-up

# Install pgEdge on the clusters

make multi-install
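Once both clusters are up, you can confirm that they can reach each other with the Cilium CLI. The kubectl context name kind-iad below is an assumption based on the zone names; check the repository’s Makefile for the actual context names:

# Verify Cilium health in the IAD cluster

cilium status --context kind-iad

# Confirm the cluster mesh link between the two zones

cilium clustermesh status --context kind-iad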

Now it’s time to test the deployment. At this point, you can use the kubectl exec commands from above to connect to the nodes, create a table, and experiment with logical replication.
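For example, the earlier psql commands adapt to the two zones like this (context and pod names are assumptions; verify yours with kubectl config get-contexts and kubectl get pods):

# Connect to the node in the IAD zone

kubectl --context kind-iad exec -it pgedge-0 -- psql -U app defaultdb

# Connect to the node in the SFO zone

kubectl --context kind-sfo exec -it pgedge-0 -- psql -U app defaultdb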

When you’re finished, take down the clusters with one command:

make multi-down

Enabling the ACE Active Consistency Engine

One of the most powerful features of pgEdge is the ACE Active Consistency Engine, which is designed to detect and fix data drift across replicated nodes automatically. It quickly and efficiently repairs any data divergence that results from exceptions or infrastructure failures.

Using ACE for Data Repair

Follow these steps to run ACE directly on your pods:

# Install the ACE pod on the Kubernetes cluster

kubectl apply -f ace/ace.yaml

# Log into the ACE pod shell

kubectl exec -it ace -- /bin/bash

# Use ACE to find differences in tables

./ace table-diff defaultdb public.users

# The above command outputs the name of the file where the diffs

# were written. Grab that name and use it in the next step

# to perform a dry-run repair:

./ace table-repair defaultdb public.users \

    --diff-file=/opt/pgedge/diffs/<$DATE>/<$JSON_FILE> \

    --source-of-truth=pgedge-0 --dry_run=True

# Execute the actual repair (again substituting out the date and

# JSON file name)

./ace table-repair defaultdb public.users \

    --diff-file=/opt/pgedge/diffs/<$DATE>/<$JSON_FILE> \

    --source-of-truth=pgedge-0

ACE can examine and repair tables with millions of rows efficiently and flexibly, and can even be run automatically to monitor and repair conflicts. For more information about ACE, review the online docs.

Run ACE on a Schedule

To set up ACE on a recurring schedule, add the following to your values.yaml:

# Example values.yaml

consistencyEngine:
  enabled: true
  schedule: "*/5 * * * *" # run every 5 minutes
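Then roll the change out with a Helm upgrade. The release name and chart path below are placeholders; reuse whatever you specified at install time:

helm upgrade pgedge ./charts/pgedge -f values.yaml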

Once enabled, the engine continuously monitors nodes for schema and row-level inconsistencies based on predefined policies. This is critical for real-time, multi-region applications where consistency guarantees matter.
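To confirm that scheduled checks are actually running, tail the logs of the pod that executes them; if that’s the ace pod applied earlier, the command is:

kubectl logs -f ace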

Production Considerations

When deploying pgEdge in production environments:

  • Resource Planning: Ensure you have adequate CPU, memory, and storage for each node

  • Network Security: Configure proper firewall rules and VPN connections between regions

  • Monitoring: Set up monitoring for replication lag, consistency checks, and node health (see the example after this list)

  • Backup Strategy: Implement regular backups even with distributed architecture

  • Scaling: Plan for horizontal scaling by adding new nodes to existing clusters
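As a starting point for the monitoring item above, standard PostgreSQL statistics views work on every pgEdge node, since logical replication flows through ordinary WAL senders. A minimal lag check to run from psql on a publishing node:

# Approximate replication lag, in bytes, for each connected peer

select application_name, state, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as lag_bytes from pg_stat_replication;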

Why Choose pgEdge?

pgEdge offers several compelling advantages for distributed PostgreSQL deployments:

Multi-Master Architecture: Deploy active-active PostgreSQL clusters where any node can handle both read and write operations, eliminating single points of failure and enabling true geographic distribution.

PostgreSQL Compatibility: Built on PostgreSQL 15 and later versions, pgEdge maintains full compatibility with existing PostgreSQL applications and tools while adding distributed capabilities.

Simplified Setup: Deployments can be pre-configured with multi-master replication, user management, and networking, reducing setup complexity for development teams.

Realistic Testing Environment: Developers can quickly spin up clusters with sample datasets to test distributed scenarios and validate application behavior under realistic conditions.

Standard PostgreSQL Tooling: Works seamlessly with existing PostgreSQL clients and administration tools, requiring no changes to application connection logic or specialized drivers.

Development-Friendly Licensing: Available for development and evaluation use without licensing restrictions, making it accessible for teams exploring distributed database architectures.

The combination of Kubernetes-native deployment via Helm, built-in consistency monitoring, and zero vendor lock-in makes pgEdge an ideal choice for teams building distributed applications that demand both performance and reliability.

More Resources

Want to see the full flow in action—from chart installation to live cluster sync?

Watch the video here

It covers:

  • Adding the pgEdge Helm repo

  • Deploying both local and multi-region clusters

  • Enabling consistency checks

  • Verifying replication and node health

If you're building applications that span geographies or demand high uptime, distributed PostgreSQL isn’t optional—it’s a requirement. The pgEdge Helm chart gives you a clean, reproducible way to deploy multi-region, active-active clusters with PostgreSQL.

Helm-native deployments reduce friction. The ACE Active Consistency Engine gives you peace of mind, pgEdge provides reliability, and Kubernetes brings portability.

Ready to test it in your stack? Get started now.