Kind runs Kubernetes clusters inside containers. Each cluster node — control plane or worker — is a container on your machine. This makes it fast to create, cheap to run, and simple to tear down.
Kind works with Docker and Podman. Docker is the smoother path; Podman works well but needs extra setup on Linux (cgroup delegation). Both are covered below.
By the end of this tutorial you’ll have:
- **Multi-node cluster**: 1 control-plane node + 2 workers, all running locally as containers.
- **HA PostgreSQL**: a 3-instance CloudNativePG cluster with automatic failover and streaming replication.
- **Node.js app**: a web app recording visits to a database, deployed as 2 replicas with health checks.
- **Caddy reverse proxy**: a lightweight reverse proxy fronting the app, exposed via NodePort.
You need a container runtime (Docker or Podman), kind, and kubectl. The sections below cover installation for each platform.
Install Docker Desktop and start it. Verify:
```shell
docker version
docker run --rm hello-world
```

On Linux, install Docker Engine via your package manager. On Fedora/RHEL:

```shell
sudo dnf install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
```

Log out and back in for the group change, then verify:

```shell
docker version
docker run --rm hello-world
```

For Podman on macOS, install it with Homebrew and start the Podman machine:

```shell
brew install podman
podman machine init
podman machine start
```

Verify:

```shell
podman version
podman run --rm hello-world
```

Podman ships by default on Fedora and RHEL. On other distributions:

```shell
# Debian/Ubuntu
sudo apt install podman

# Arch
sudo pacman -S podman
```

Verify:

```shell
podman version
podman run --rm hello-world
```

The easiest way to install kind and kubectl is through mise, which manages tool versions per project:
Install mise if you don’t have it:
```shell
# macOS
brew install mise

# Linux (cargo)
cargo install cargo-binstall && cargo binstall mise

# Linux (curl)
curl https://mise.run | sh
```

Then install kind and kubectl:

```shell
mise use --global kind@latest kubectl@latest
```

Alternatively, install them with Homebrew:

```shell
brew install kind kubectl
```

Or download the binaries directly (Linux x86_64 shown):

```shell
# kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```

Verify both:

```shell
kind version
kubectl version --client
```

Kind runs kubelet inside containers. On Linux with cgroups v2 (the default on modern distributions), rootless Podman needs explicit cgroup delegation so kubelet can manage resources inside the node containers.
Create the systemd drop-in directory:

```shell
sudo mkdir -p /etc/systemd/system/user@.service.d
```

Write the delegation config:

```shell
printf '[Service]\nDelegate=cpu cpuset io memory pids\n' \
  | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
```

Reload systemd:

```shell
sudo systemctl daemon-reload
```

This is a one-time setup. The drop-in tells systemd to delegate cgroup controllers to user sessions, which kubelet needs to set resource limits on pods.
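To confirm the delegation took effect after your next login, you can inspect the controllers systemd exposes to your user session. This is a sketch assuming the standard cgroup v2 layout under `/sys/fs/cgroup` on a systemd distribution:

```shell
# List the controllers delegated to the current user session.
# Once the drop-in is active, you should see cpu, cpuset, io, memory and pids.
cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"
```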
Kind defaults to Docker. Tell it to use Podman by setting an environment variable:
```shell
export KIND_EXPERIMENTAL_PROVIDER=podman
```

Add this to your shell profile (`~/.bashrc`, `~/.zshrc`, or `~/.config/fish/config.fish`) so it persists.
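To check that kind picked up the setting, run any command that touches the provider; kind logs a notice along the lines of `enabling experimental podman provider` when the variable is set (exact wording may vary by version):

```shell
# Any provider-backed kind command should log the
# "enabling experimental podman provider" notice to stderr.
export KIND_EXPERIMENTAL_PROVIDER=podman
kind get clusters
```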
Set up the directory structure:

```shell
mkdir -p kind-tutorial/app kind-tutorial/k8s
cd kind-tutorial
```

Write the cluster configuration. Create `kind-multinode.yaml` at the project root:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

This gives you one control-plane node and two workers — enough to see pod scheduling across nodes and test node failure scenarios.
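As a side note: if you'd rather reach NodePort services directly on localhost instead of using `kubectl port-forward` later, kind can map a host port to a node container at creation time via its `extraPortMappings` field. A sketch (port 30080 matches the Caddy NodePort used later in this tutorial):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Forward host port 30080 to the same NodePort on this node.
    extraPortMappings:
      - containerPort: 30080
        hostPort: 30080
        protocol: TCP
  - role: worker
  - role: worker
```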
With Docker:

```shell
kind create cluster --config kind-multinode.yaml
```

With rootless Podman on Linux, run the command in a delegated systemd scope:

```shell
systemd-run --scope --user -p Delegate=yes \
  kind create cluster --config kind-multinode.yaml
```

With Podman on macOS, the plain command works:

```shell
kind create cluster --config kind-multinode.yaml
```

Podman Desktop on macOS runs containers inside a Linux VM, which handles cgroup delegation internally. No systemd-run wrapper needed.
Kind automatically sets your kubectl context. Check that all nodes are Ready:

```shell
kubectl get nodes
```

Expected output:

```
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   45s   v1.32.0
kind-worker          Ready    <none>          30s   v1.32.0
kind-worker2         Ready    <none>          30s   v1.32.0
```

The demo app is a Node.js server that records page visits to PostgreSQL and displays them.
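Since every node is an ordinary container, you can also inspect the cluster from the host. The filter below assumes the default cluster name `kind`, which prefixes each node container's name:

```shell
# List the node containers kind created on the host.
docker ps --filter "name=kind" --format "{{.Names}}"

# With Podman, the equivalent is:
# podman ps --filter "name=kind" --format "{{.Names}}"
```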
Create `app/package.json`:

```json
{
  "name": "hello-k8s",
  "version": "1.0.0",
  "private": true,
  "main": "server.js",
  "scripts": { "start": "node server.js" },
  "dependencies": { "pg": "^8.13.0" }
}
```

Create `app/server.js`:
```js
const http = require("http");
const os = require("os");
const { Pool } = require("pg");

const pool = new Pool({
  host: process.env.PGHOST || "postgres-ha-rw",
  port: parseInt(process.env.PGPORT || "5432"),
  user: process.env.PGUSER || "app",
  password: process.env.PGPASSWORD || "app",
  database: process.env.PGDATABASE || "app",
});

async function init() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS visits (
      id SERIAL PRIMARY KEY,
      ts TIMESTAMPTZ DEFAULT NOW(),
      pod TEXT
    )
  `);
}

const server = http.createServer(async (req, res) => {
  if (req.url === "/healthz") {
    try {
      await pool.query("SELECT 1");
      res.writeHead(200).end("ok");
    } catch {
      res.writeHead(503).end("db unreachable");
    }
    return;
  }

  try {
    const pod = os.hostname();
    await pool.query("INSERT INTO visits (pod) VALUES ($1)", [pod]);
    const { rows: countRows } = await pool.query("SELECT COUNT(*)::int AS n FROM visits");
    const { rows: recent } = await pool.query(
      "SELECT pod, ts FROM visits ORDER BY id DESC LIMIT 5",
    );

    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(`<!DOCTYPE html>
<html><body>
  <h1>Hello from <code>${pod}</code></h1>
  <p>Total visits: <strong>${countRows[0].n}</strong></p>
  <h3>Recent visits</h3>
  <table>
    <tr><th>Pod</th><th>Time</th></tr>
    ${recent.map((r) => `<tr><td>${r.pod}</td><td>${r.ts}</td></tr>`).join("")}
  </table>
</body></html>`);
  } catch (e) {
    res.writeHead(500).end("Error: " + e.message);
  }
});

init()
  .then(() => server.listen(3000, () => console.log("listening :3000")))
  .catch((e) => {
    console.error("init failed:", e);
    process.exit(1);
  });
```

Create `app/Dockerfile`:
```dockerfile
# app/Dockerfile
FROM node:22-alpine
WORKDIR /app
COPY package.json ./
RUN npm install --omit=dev
COPY server.js ./
USER node
EXPOSE 3000
CMD ["node", "server.js"]
```

Build the image. With Docker:

```shell
docker build -t localhost/hello-k8s:latest app/
```

With Podman:

```shell
podman build -t localhost/hello-k8s:latest app/
```

Kind nodes run their own containerd, separate from your host. You need to explicitly load images into the cluster.
With Docker:

```shell
kind load docker-image localhost/hello-k8s:latest
```

With Podman, export the image to an archive and load that:

```shell
podman save localhost/hello-k8s:latest -o /tmp/hello-k8s.tar
kind load image-archive /tmp/hello-k8s.tar
rm -f /tmp/hello-k8s.tar
```

Namespace — `k8s/00-namespace.yaml`
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
```

PostgreSQL HA cluster — `k8s/01-postgres-cluster.yaml`
This uses the CloudNativePG operator to create a 3-instance PostgreSQL cluster with automatic failover.
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-ha
  namespace: demo
spec:
  instances: 3
  postgresql:
    parameters:
      max_connections: "100"
      shared_buffers: "128MB"
  bootstrap:
    initdb:
      database: app
      owner: app
  storage:
    size: 1Gi
```

Application deployment — `k8s/02-app.yaml`
Two replicas of the Node.js app. Credentials come from the secret that CloudNativePG creates automatically.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: app
          image: localhost/hello-k8s:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
          env:
            - name: PGHOST
              value: postgres-ha-rw
            - name: PGPORT
              value: "5432"
            - name: PGDATABASE
              value: app
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: postgres-ha-app
                  key: username
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-ha-app
                  key: password
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  namespace: demo
spec:
  selector:
    app: hello-app
  ports:
    - port: 3000
      targetPort: 3000
```

Caddy reverse proxy — `k8s/03-caddy.yaml`
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: caddyfile
  namespace: demo
data:
  Caddyfile: |
    :80 {
      reverse_proxy hello-app:3000
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caddy
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caddy
  template:
    metadata:
      labels:
        app: caddy
    spec:
      containers:
        - name: caddy
          image: caddy:2-alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: caddyfile
              mountPath: /etc/caddy/Caddyfile
              subPath: Caddyfile
          readinessProbe:
            httpGet:
              path: /
              port: 80
            periodSeconds: 5
      volumes:
        - name: caddyfile
          configMap:
            name: caddyfile
---
apiVersion: v1
kind: Service
metadata:
  name: caddy
  namespace: demo
spec:
  type: NodePort
  selector:
    app: caddy
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```

Install the CloudNativePG operator
```shell
kubectl apply --server-side \
  -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.25/releases/cnpg-1.25.1.yaml
```

Wait for the controller to become available:

```shell
kubectl wait deployment/cnpg-controller-manager \
  -n cnpg-system \
  --for=condition=Available \
  --timeout=120s
```

Apply the manifests:

```shell
kubectl apply -f k8s/
```

Wait for PostgreSQL to come up. The CNPG operator creates pods one at a time — primary first, then replicas. This takes a few minutes.

```shell
kubectl wait pod \
  -n demo \
  -l cnpg.io/cluster=postgres-ha \
  --for=condition=Ready \
  --timeout=300s
```

Wait for the application rollouts:

```shell
kubectl rollout status deployment/hello-app -n demo --timeout=120s
kubectl rollout status deployment/caddy -n demo --timeout=60s
```

Verify everything is running:

```shell
kubectl get all -n demo
```

You should see 3 PostgreSQL pods, 2 hello-app pods, and 1 Caddy pod — all Running and Ready.
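CloudNativePG also installs a `Cluster` custom resource you can query directly; its status output summarizes instance count, readiness, and the current primary. A quick check (resource name as defined by the CNPG CRD):

```shell
# Show the CNPG cluster's own view of its health and current primary.
kubectl get cluster -n demo postgres-ha
```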
Port-forward the Caddy service to your local machine:
```shell
kubectl port-forward -n demo svc/caddy 8080:80
```

Open http://localhost:8080 in your browser. Each refresh records a visit to PostgreSQL and shows which pod served the request. Refresh a few times to see requests distribute across the two app replicas.
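You can also watch the load-balancing from the terminal. This sketch assumes the port-forward above is still running and that pod hostnames start with `hello-app` (the Deployment name):

```shell
# Request the page five times and print which pod answered each time.
for i in 1 2 3 4 5; do
  curl -s localhost:8080 | grep -o 'hello-app[a-z0-9-]*' | head -n 1
done
```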
One of the advantages of CloudNativePG is automatic failover. Delete the primary PostgreSQL pod and watch the operator promote a replica:
```shell
# Find the primary
kubectl get pods -n demo -l cnpg.io/cluster=postgres-ha,role=primary

# Delete it
kubectl delete pod -n demo -l cnpg.io/cluster=postgres-ha,role=primary

# Watch the failover
kubectl get pods -n demo -l cnpg.io/cluster=postgres-ha -w
```

Within seconds, one of the replicas becomes the new primary. The app stays available because it connects through the `postgres-ha-rw` service, which the operator updates to point at the new primary.
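To put a rough number on the failover window, poll the app from a second terminal while you delete the primary (this assumes the port-forward from the previous section is still active):

```shell
# Print a timestamped HTTP status once per second; a brief run of
# non-200 codes marks the failover window.
while true; do
  code=$(curl -s -o /dev/null -w '%{http_code}' localhost:8080)
  echo "$(date +%T) $code"
  sleep 1
done
```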
Delete the cluster when you’re done:
```shell
kind delete cluster
```

This removes all containers, networks, and volumes associated with the cluster. Nothing persists.
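To double-check that nothing is left behind:

```shell
# An empty cluster list means the delete succeeded...
kind get clusters

# ...and no node containers should remain (use podman ps with Podman).
docker ps --filter "name=kind"
```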
**Pods stuck in Pending**

Check node status with `kubectl get nodes`. If nodes show NotReady, the kubelet inside the Kind containers may not have started. On Linux with Podman, this almost always means cgroup delegation is missing — revisit the Configure Podman section.

**ErrImageNeverPull**

The image wasn't loaded into the Kind cluster. Re-run the image loading step. With Podman, always use `podman save` + `kind load image-archive` rather than `kind load docker-image`.

**Connection refused on port 8080**

Make sure the `kubectl port-forward` command is still running. It blocks the terminal — use a separate shell for other commands. If the Caddy pod isn't ready, check its logs: `kubectl logs -n demo -l app=caddy`.

**PostgreSQL pods in CrashLoopBackOff**

The CNPG operator may not be installed yet. Check with `kubectl get deployment -n cnpg-system`. If it's missing, install it (step 1 of Deploy the stack) and re-apply the manifests.