Multi-Master Replication: Using pgEdge Enterprise Postgres with Spock and CloudNativePG
When I started exploring using CloudNativePG (CNPG) with pgEdge Enterprise Postgres and the Spock extension, I realized there were a few gotchas that weren't obvious at first. In this blog post, I'll share my experiences running a pgEdge Docker image with Spock inside CloudNativePG, along with the lessons I learned about superuser access, configuration management, and initialization scripts.
Each of our CNPG clusters runs three Postgres pods. By default, CNPG uses physical streaming replication within a cluster: one read/write primary and two read-only replicas. Our two CNPG clusters use Spock for bi-directional logical replication between the clusters, while the internal physical replication lets us add connection pooling so read capacity is available for queries when needed.
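For reference, CNPG exposes each cluster through per-role services, and the -rw service always points at the current primary; that is the endpoint the Spock DSNs later in this post connect to. A quick way to list them (this assumes CNPG's standard cnpg.io/cluster label and the cluster names used below):

```shell
# Services CNPG creates for each cluster:
#   cluster-a-rw  -> current primary (read/write)
#   cluster-a-ro  -> replicas only (read-only)
#   cluster-a-r   -> any instance (reads)
kubectl get svc -n cnpg-system -l cnpg.io/cluster=cluster-a
```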
The steps that follow demonstrate running a pgEdge Enterprise Postgres Docker image with Spock inside CNPG, setting up nodes, and configuring bi-directional replication.
1. Installing the CloudNativePG Operator
The CNPG operator manages PostgreSQL clusters in a Kubernetes environment. Install it with the following commands:
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm install cnpg cnpg/cloudnative-pg \
--namespace cnpg-system --create-namespace
kubectl get pods -n cnpg-system

The helm install command deploys the operator in the cnpg-system namespace; kubectl get pods confirms that the pods are running.
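Before deploying any clusters, you can also wait for the operator rollout to finish; the deployment name below matches the operator pod shown later in this post:

```shell
# Block until the operator deployment is available (or time out after 2 minutes)
kubectl rollout status deployment/cnpg-cloudnative-pg -n cnpg-system --timeout=120s
```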
2. Deploying PostgreSQL Clusters
In this step, we deploy two clusters (A and B) that we'll configure for bi-directional replication with the Spock extension.
Cluster A (cluster-a.yaml)
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-a
  namespace: cnpg-system
spec:
  instances: 3
  imageName: ghcr.io/pgedge/pgedge-postgres:17-spock5-minimal
  enableSuperuserAccess: true
  storage:
    size: 50Gi
  postgresql:
    shared_preload_libraries:
      - "spock"
    parameters:
      shared_buffers: "4GB"
      effective_cache_size: "12GB"
      maintenance_work_mem: "2GB"
      work_mem: "64MB"
      max_connections: "200"
      track_commit_timestamp: "on"
      wal_level: "logical"
      max_replication_slots: "20"
      max_wal_senders: "20"
      max_worker_processes: "12"
      effective_io_concurrency: "200"
      random_page_cost: "1.1"
      checkpoint_completion_target: "0.9"
  bootstrap:
    initdb:
      database: appdb
      owner: appuser
      postInitApplicationSQL:
        - CREATE EXTENSION IF NOT EXISTS spock;

Cluster B (cluster-b.yaml)
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-b
  namespace: cnpg-system
spec:
  instances: 3
  imageName: ghcr.io/pgedge/pgedge-postgres:17-spock5-minimal
  enableSuperuserAccess: true
  storage:
    size: 50Gi
  postgresql:
    shared_preload_libraries:
      - "spock"
    parameters:
      shared_buffers: "4GB"
      effective_cache_size: "12GB"
      maintenance_work_mem: "2GB"
      work_mem: "64MB"
      max_connections: "200"
      track_commit_timestamp: "on"
      wal_level: "logical"
      max_replication_slots: "20"
      max_wal_senders: "20"
      max_worker_processes: "12"
      effective_io_concurrency: "200"
      random_page_cost: "1.1"
      checkpoint_completion_target: "0.9"
  bootstrap:
    initdb:
      database: appdb
      owner: appuser
      postInitApplicationSQL:
        - CREATE EXTENSION IF NOT EXISTS spock;

Next, use the kubectl command to create the clusters:
kubectl apply -f cluster-a.yaml
kubectl apply -f cluster-b.yaml
kubectl get cluster -n cnpg-system
kubectl get pods -n cnpg-system

Wait until both clusters are healthy before moving on to step 3:
NAME        AGE   INSTANCES   READY   STATUS                     PRIMARY
cluster-a   88s   3           3       Cluster in healthy state   cluster-a-1
cluster-b   88s   3           3       Cluster in healthy state   cluster-b-1

NAME                                 READY   STATUS    RESTARTS   AGE
cluster-a-1                          1/1     Running   0          79s
cluster-a-2                          1/1     Running   0          56s
cluster-a-3                          1/1     Running   0          36s
cluster-b-1                          1/1     Running   0          79s
cluster-b-2                          1/1     Running   0          58s
cluster-b-3                          1/1     Running   0          38s
cnpg-cloudnative-pg-646fcc95-rhj72   1/1     Running   0          2m5s

3. Establishing Superuser Access
Spock requires superuser privileges to create nodes and replication sets. Setting enableSuperuserAccess: true in the cluster manifests tells CNPG to enable the postgres superuser; the operator stores its password in the cluster-a-superuser and cluster-b-superuser secrets, which the Jobs below consume.
Security Note: Be sure to disable superuser access after setup by setting enableSuperuserAccess: false and re-applying the manifest.
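One way to do that without editing the YAML files is a merge patch against each Cluster resource (a sketch; re-applying the edited manifests works just as well):

```shell
# Disable superuser access once Spock setup is complete;
# CNPG then disables password login for the postgres role.
kubectl patch clusters.postgresql.cnpg.io cluster-a -n cnpg-system \
  --type merge -p '{"spec":{"enableSuperuserAccess":false}}'
kubectl patch clusters.postgresql.cnpg.io cluster-b -n cnpg-system \
  --type merge -p '{"spec":{"enableSuperuserAccess":false}}'
```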
4. Creating Spock Nodes
Nodes must exist before you configure replication; Spock will verify node existence before connecting.
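After running the Jobs below, you can confirm a node was registered by querying Spock's catalog; this assumes the spock.node table that the extension creates:

```sql
-- Run on cluster-a's primary; should return one row for node_a
SELECT node_id, node_name FROM spock.node;
```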
Cluster A Node (spock-node-a-job.yaml)
apiVersion: batch/v1
kind: Job
metadata:
  name: spock-node-a
  namespace: cnpg-system
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: spock-node-a
          image: ghcr.io/pgedge/pgedge-postgres:17-spock5-minimal
          env:
            - name: PGHOST
              value: cluster-a-rw.cnpg-system.svc.cluster.local
            - name: PGPORT
              value: "5432"
            - name: PGDATABASE
              value: appdb
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: cluster-a-superuser
                  key: password
          command: ["/bin/sh", "-c"]
          args:
            - |
              psql -v ON_ERROR_STOP=1 -c "SELECT spock.node_create(
                node_name := 'node_a',
                dsn := 'host=cluster-a-rw.cnpg-system.svc.cluster.local port=5432 dbname=appdb user=postgres password=${PGPASSWORD}'
              );"

Cluster B Node (spock-node-b-job.yaml)
apiVersion: batch/v1
kind: Job
metadata:
  name: spock-node-b
  namespace: cnpg-system
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: spock-node-b
          image: ghcr.io/pgedge/pgedge-postgres:17-spock5-minimal
          env:
            - name: PGHOST
              value: cluster-b-rw.cnpg-system.svc.cluster.local
            - name: PGPORT
              value: "5432"
            - name: PGDATABASE
              value: appdb
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: cluster-b-superuser
                  key: password
          command: ["/bin/sh", "-c"]
          args:
            - |
              psql -v ON_ERROR_STOP=1 -c "SELECT spock.node_create(
                node_name := 'node_b',
                dsn := 'host=cluster-b-rw.cnpg-system.svc.cluster.local port=5432 dbname=appdb user=postgres password=${PGPASSWORD}'
              );"

After creating the .yaml files, use the following commands to create the nodes:
kubectl apply -f spock-node-a-job.yaml
kubectl apply -f spock-node-b-job.yaml

5. Defining Bi-Directional Replication
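Before defining subscriptions, it's worth confirming that the node-creation Jobs from the previous step actually succeeded; kubectl wait can block until each Job completes:

```shell
# Block until both node-creation Jobs report completion (or fail after 2 minutes)
kubectl wait --for=condition=complete job/spock-node-a -n cnpg-system --timeout=120s
kubectl wait --for=condition=complete job/spock-node-b -n cnpg-system --timeout=120s
```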
Spock's bi-directional replication allows each node to act as both a provider and a subscriber; the following .yaml files create the bi-directional subscriptions between the clusters.
Cluster A → B (spock-repl-a-job.yaml)
apiVersion: batch/v1
kind: Job
metadata:
  name: spock-repl-a
  namespace: cnpg-system
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup-repl-a
          image: ghcr.io/pgedge/pgedge-postgres:17-spock5-minimal
          command: ["/bin/sh", "-c"]
          args:
            - |
              set -euo pipefail
              export PGPASSWORD=$(cat /secrets/a/password)
              # Create test table
              psql -h cluster-a-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                CREATE TABLE IF NOT EXISTS test_repl(
                  id SERIAL PRIMARY KEY,
                  msg TEXT
                );
              "
              # Add table to replication set
              psql -h cluster-a-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                SELECT spock.repset_add_table('default', 'test_repl'::regclass, true);
              "
              # Get cluster-b password
              B_PASS=$(cat /secrets/b/password)
              # Create subscription to cluster-b
              psql -h cluster-a-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                SELECT spock.sub_create(
                  subscription_name := 'sub_a_to_b',
                  provider_dsn := 'host=cluster-b-rw.cnpg-system.svc.cluster.local port=5432 dbname=appdb user=postgres password=${B_PASS}',
                  replication_sets := ARRAY['default'],
                  synchronize_data := true,
                  forward_origins := '{}'
                );
              "
              # Enable subscription
              psql -h cluster-a-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                SELECT spock.sub_enable('sub_a_to_b', true);
              "
          volumeMounts:
            - name: secret-a
              mountPath: /secrets/a
              readOnly: true
            - name: secret-b
              mountPath: /secrets/b
              readOnly: true
      volumes:
        - name: secret-a
          secret:
            secretName: cluster-a-superuser
            items:
              - key: password
                path: password
        - name: secret-b
          secret:
            secretName: cluster-b-superuser
            items:
              - key: password
                path: password

Cluster B → A (spock-repl-b-job.yaml)
apiVersion: batch/v1
kind: Job
metadata:
  name: spock-repl-b
  namespace: cnpg-system
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup-repl-b
          image: ghcr.io/pgedge/pgedge-postgres:17-spock5-minimal
          command: ["/bin/sh", "-c"]
          args:
            - |
              set -euo pipefail
              export PGPASSWORD=$(cat /secrets/b/password)
              # Create test table
              psql -h cluster-b-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                CREATE TABLE IF NOT EXISTS test_repl(
                  id SERIAL PRIMARY KEY,
                  msg TEXT
                );
              "
              # Add table to replication set
              psql -h cluster-b-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                SELECT spock.repset_add_table('default', 'test_repl'::regclass, true);
              "
              # Get cluster-a password
              A_PASS=$(cat /secrets/a/password)
              # Create subscription to cluster-a
              psql -h cluster-b-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                SELECT spock.sub_create(
                  subscription_name := 'sub_b_to_a',
                  provider_dsn := 'host=cluster-a-rw.cnpg-system.svc.cluster.local port=5432 dbname=appdb user=postgres password=${A_PASS}',
                  replication_sets := ARRAY['default'],
                  synchronize_data := true,
                  forward_origins := '{}'
                );
              "
              # Enable subscription
              psql -h cluster-b-rw.cnpg-system.svc.cluster.local -U postgres -d appdb -c "
                SELECT spock.sub_enable('sub_b_to_a', true);
              "
          volumeMounts:
            - name: secret-a
              mountPath: /secrets/a
              readOnly: true
            - name: secret-b
              mountPath: /secrets/b
              readOnly: true
      volumes:
        - name: secret-a
          secret:
            secretName: cluster-a-superuser
            items:
              - key: password
                path: password
        - name: secret-b
          secret:
            secretName: cluster-b-superuser
            items:
              - key: password
                path: password

Use the following commands to execute the .yaml files and establish bi-directional replication between the two nodes:
kubectl apply -f spock-repl-a-job.yaml
kubectl apply -f spock-repl-b-job.yaml

6. Testing Bi-Directional Replication
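Before inserting test rows, you can check that both subscriptions report a replicating status; this assumes Spock's sub_show_status() function:

```shell
# On each primary, the status column should read 'replicating'
kubectl exec -it -n cnpg-system cluster-a-1 -- psql -U postgres -d appdb \
  -c "SELECT subscription_name, status FROM spock.sub_show_status();"
kubectl exec -it -n cnpg-system cluster-b-1 -- psql -U postgres -d appdb \
  -c "SELECT subscription_name, status FROM spock.sub_show_status();"
```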
The following commands connect to the primary of each cluster with psql and exercise replication, demonstrating that rows inserted on cluster A replicate to cluster B and rows inserted on cluster B replicate to cluster A:
kubectl exec -it -n cnpg-system cluster-a-1 -- psql -U postgres -d appdb -c "INSERT INTO test_repl (id, msg) VALUES (1, 'hello from A');"
kubectl exec -it -n cnpg-system cluster-b-1 -- psql -U postgres -d appdb -c "SELECT * FROM test_repl;"
kubectl exec -it -n cnpg-system cluster-b-1 -- psql -U postgres -d appdb -c "INSERT INTO test_repl (id, msg) VALUES (2, 'hello from B');"
kubectl exec -it -n cnpg-system cluster-a-1 -- psql -U postgres -d appdb -c "SELECT * FROM test_repl;"

In both cases, rows inserted on one node appear on the other, confirming active-active replication.
Key Takeaways
Superuser Access: Required for Spock node creation and replication setup; disable it after setup for security.
Declarative Configuration: CNPG owns postgresql.conf. Manual edits, including changes made in a Docker image's entrypoint script, are overwritten by the operator, so make every configuration change through the Cluster spec.
Separate shared_preload_libraries Parameter: CNPG exposes shared_preload_libraries as its own field under spec.postgresql, so don't set it in postgresql.conf or the parameters map.
Initialization Scripts: Use postInitApplicationSQL for database-specific extensions.
Node Creation: Nodes must exist before replication; Spock validates node availability when a subscription connects.
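To make the declarative-configuration point concrete: changing a Postgres parameter means editing the Cluster manifest and re-applying it, never touching postgresql.conf. For example, to raise work_mem on cluster-a, change the value under spec.postgresql.parameters:

```yaml
# In cluster-a.yaml
spec:
  postgresql:
    parameters:
      work_mem: "128MB"   # previously "64MB"
```

Then run kubectl apply -f cluster-a.yaml and the operator propagates the change, reloading or restarting instances as the parameter requires.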


