
PostgreSQL with CloudNativePG

Helm charts like Bitnami’s PostgreSQL give you a StatefulSet and leave the rest to you — failover, backup scheduling, replica lag monitoring. You write the runbooks, you handle the 2 AM pages.

CloudNativePG (CNPG) is a Kubernetes operator. It watches your PostgreSQL instances, promotes replicas when the primary fails, manages WAL archiving, and exposes metrics. The operator handles the operational work that Helm charts push onto you.

For anything beyond a dev database, use the operator. See Helm vs Kustomize for more on packaging decisions.

Add the CNPG Helm repo and install the operator into its own namespace:

helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm install cnpg-operator cnpg/cloudnative-pg \
--namespace cnpg-system \
--create-namespace

Wait for the operator deployment to roll out:

kubectl rollout status deployment -n cnpg-system cnpg-operator-cloudnative-pg

The operator installs CRDs for Cluster, Backup, ScheduledBackup, and several others. Once the deployment is ready, you can create PostgreSQL clusters.

This manifest creates one primary and two read replicas (three instances total):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-postgres
  namespace: default
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:17.2-1
  bootstrap:
    initdb:
      database: app
      owner: app
  storage:
    size: 10Gi
  walStorage:
    size: 2Gi
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  enableSuperuserAccess: true

Apply it:

kubectl apply -f cluster.yaml

Watch the pods come up:

kubectl get pods -l cnpg.io/cluster=my-postgres -w

CNPG creates three pods: my-postgres-1 (primary), my-postgres-2, and my-postgres-3 (replicas). The controller elects the primary and configures streaming replication to the replicas.

CNPG creates three services for every cluster:

| Service | DNS name | Purpose |
| --- | --- | --- |
| my-postgres-rw | my-postgres-rw.default.svc | Primary only — read/write traffic |
| my-postgres-ro | my-postgres-ro.default.svc | Replicas only — read-only queries |
| my-postgres-r | my-postgres-r.default.svc | Any instance — primary or replica |

Point your application at -rw for writes and -ro for reads. The services track the current primary, so after a failover your connection strings stay the same.

CNPG creates a secret named my-postgres-app containing everything an application needs to connect:

| Key | Value |
| --- | --- |
| username | app |
| password | (auto-generated) |
| host | my-postgres-rw.default.svc |
| port | 5432 |
| dbname | app |
| uri | Full postgresql:// connection string |
| jdbc-uri | JDBC connection string |
| pgpass | .pgpass formatted entry |

Mount this secret as environment variables in your application pods. See Node Application for a worked example.
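As a minimal sketch of that wiring (the Deployment name and image are placeholders, not from CNPG), inject the ready-made connection string from the secret:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest # placeholder image
          env:
            # Full postgresql:// connection string from the CNPG-generated secret
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: my-postgres-app
                  key: uri
```

Because the secret's host points at the -rw service, the application follows the primary across failovers without any config changes.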

With enableSuperuserAccess: true, CNPG also creates a my-postgres-superuser secret with the postgres superuser credentials.

Port-forward to the primary service:

kubectl port-forward svc/my-postgres-rw 5432:5432

Connect with psql using the auto-generated password. Note that the uri key in the secret points at the in-cluster DNS name, which won't resolve from your workstation, so build a localhost connection string instead:

psql "postgresql://app:$(kubectl get secret my-postgres-app -o jsonpath='{.data.password}' | base64 -d)@localhost:5432/app"

CNPG monitors every instance through readiness probes and replication status. When the primary fails:

  1. The controller detects the failure and identifies the most up-to-date replica (lowest replication lag).
  2. It promotes that replica to primary.
  3. The -rw service updates its endpoints to point at the new primary.
  4. The former primary restarts and rejoins the cluster as a replica.

Applications connected through the -rw service reconnect to the new primary automatically. The failover typically completes in under 30 seconds.
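Failover behavior is tunable on the Cluster spec. A sketch of the relevant fields (verify the names against the CNPG API reference for your operator version):

```yaml
spec:
  # Seconds to wait before failing over an unresponsive primary
  # (0 = fail over as soon as the failure is confirmed).
  failoverDelay: 0
  # unsupervised: the operator performs switchovers itself during
  # rolling updates; supervised: a human promotes the new primary.
  primaryUpdateStrategy: unsupervised
```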

CNPG supports continuous WAL archiving and base backups to object storage. This ScheduledBackup runs a base backup every night at midnight (note that CNPG schedules use a six-field cron expression with a leading seconds field, not the five-field Kubernetes CronJob format):

apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: daily-backup
spec:
  schedule: "0 0 0 * * *"
  immediate: true
  backupOwnerReference: self
  cluster:
    name: my-postgres
  method: barmanObjectStore

This requires a barmanObjectStore section in the Cluster spec that points to an S3-compatible bucket. See the CNPG backup documentation for the full configuration.
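As a rough sketch of that Cluster-side configuration (the bucket, endpoint, and credentials secret below are placeholders):

```yaml
spec:
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      destinationPath: s3://my-backup-bucket/my-postgres # placeholder bucket
      endpointURL: https://s3.example.com # placeholder S3-compatible endpoint
      s3Credentials:
        accessKeyId:
          name: backup-s3-creds # placeholder secret name
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-s3-creds
          key: SECRET_ACCESS_KEY
```

With this in place, CNPG archives WAL segments continuously, so restores can target any point in time between base backups.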

CNPG exposes metrics on each pod at /metrics. Create a PodMonitor so Prometheus scrapes them:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: my-postgres-monitor
spec:
  selector:
    matchLabels:
      cnpg.io/cluster: my-postgres
  podMetricsEndpoints:
    - port: metrics

The CNPG project publishes Grafana dashboards that work with these metrics out of the box.
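Alternatively, if the Prometheus Operator CRDs are present, CNPG can create the PodMonitor for you, straight from the Cluster spec:

```yaml
spec:
  monitoring:
    enablePodMonitor: true
```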

For GitOps, deploy both the operator and the cluster through Flux CD.

First, create a HelmRepository and HelmRelease for the operator:

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: cnpg
  namespace: flux-system
spec:
  interval: 1h
  url: https://cloudnative-pg.github.io/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cnpg-operator
  namespace: cnpg-system
spec:
  interval: 30m
  chart:
    spec:
      chart: cloudnative-pg
      version: ">=0.22.0 <1.0.0"
      sourceRef:
        kind: HelmRepository
        name: cnpg
        namespace: flux-system
  install:
    createNamespace: true

Then add the Cluster manifest to a Kustomization that depends on the operator:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: postgres-cluster
  namespace: flux-system
spec:
  interval: 30m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/postgres
  prune: true
  dependsOn:
    - name: cnpg-operator

The dependsOn field ensures Flux installs the operator (and its CRDs) before attempting to create the Cluster resource.

If you store database credentials in Git, encrypt them with SOPS. See Secrets Management for the setup.
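A sketch of how that looks on the Flux side, assuming an age key stored in a sops-age secret (the secret name is an assumption from the common Flux setup):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: postgres-cluster
  namespace: flux-system
spec:
  # ...same fields as above...
  # Decrypt SOPS-encrypted manifests before applying them
  decryption:
    provider: sops
    secretRef:
      name: sops-age # assumed secret holding the age private key
```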