
Running a Local Cluster with Kind

Kind runs Kubernetes clusters inside containers. Each cluster node — control plane or worker — is a container on your machine. This makes it fast to create, cheap to run, and simple to tear down.

Kind works with Docker and Podman. Docker is the smoother path; Podman works well but needs extra setup on Linux (cgroup delegation). Both are covered below.

By the end of this tutorial you’ll have:

- Multi-node cluster: 1 control-plane node + 2 workers, all running locally as containers.
- HA PostgreSQL: a 3-instance CloudNativePG cluster with automatic failover and streaming replication.
- Node.js app: a web app that records visits to a database, deployed as 2 replicas with health checks.
- Caddy reverse proxy: a lightweight reverse proxy fronting the app, exposed via NodePort.

You need a container runtime (Docker or Podman), kind, and kubectl. The sections below cover installation for each platform.

Install Docker Desktop (or Docker Engine on Linux) and start it. Verify:

docker version
docker run --rm hello-world

The easiest way to install kind and kubectl is through mise, which manages tool versions per project:

Install mise if you don’t have it:

# macOS
brew install mise
# Linux (cargo)
cargo install cargo-binstall && cargo binstall mise
# Linux (curl)
curl https://mise.run | sh

Then install kind and kubectl:

mise use --global kind@latest kubectl@latest

Verify both:

kind version
kubectl version --client

Kind runs kubelet inside containers. On Linux with cgroups v2 (the default on modern distributions), rootless Podman needs explicit cgroup delegation so kubelet can manage resources inside the node containers.

  1. Create the systemd drop-in directory

    sudo mkdir -p /etc/systemd/system/user@.service.d
  2. Write the delegation config

    printf '[Service]\nDelegate=cpu cpuset io memory pids\n' \
    | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
  3. Reload systemd

    sudo systemctl daemon-reload

This is a one-time setup. The drop-in tells systemd to delegate cgroup controllers to user sessions, which kubelet needs to set resource limits on pods.
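To confirm the delegation took effect, start a new login session and inspect your user slice's controller list (the path below assumes systemd's default cgroup v2 layout):

```shell
# Controllers delegated to the current user's systemd session;
# after the drop-in above this should include: cpu cpuset io memory pids
cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"
```

If cpu or cpuset are missing, log out and back in (or reboot) so the user session picks up the new configuration.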

Kind defaults to Docker. Tell it to use Podman by setting an environment variable:

export KIND_EXPERIMENTAL_PROVIDER=podman

Add this to your shell profile (~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish) so it persists.

  1. Set up the directory structure

    mkdir -p kind-tutorial/app kind-tutorial/k8s
    cd kind-tutorial
    The finished project will look like this:

    kind-tutorial/
    ├── app/
    │   ├── Dockerfile
    │   ├── package.json
    │   └── server.js
    ├── k8s/
    │   ├── 00-namespace.yaml
    │   ├── 01-postgres-cluster.yaml
    │   ├── 02-app.yaml
    │   └── 03-caddy.yaml
    └── kind-multinode.yaml
  2. Write the cluster configuration

    Create kind-multinode.yaml at the project root:

    kind-multinode.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker

    This gives you one control-plane node and two workers — enough to see pod scheduling across nodes and test node failure scenarios.
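As an aside: kind can also publish node ports to the host at cluster-creation time via extraPortMappings. A variant of the config above that would expose NodePort 30080 (used by the Caddy service later in this tutorial) directly on localhost:30080:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # the Caddy service's NodePort
        hostPort: 30080
  - role: worker
  - role: worker
```

Port mappings can only be set when the cluster is created, so decide up front. The rest of the tutorial assumes the plain config and uses port-forwarding instead.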

Create the cluster:
kind create cluster --config kind-multinode.yaml

Kind automatically sets your kubectl context. Check that all nodes are Ready:

kubectl get nodes

Expected output:

NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   45s   v1.32.0
kind-worker          Ready    <none>          30s   v1.32.0
kind-worker2         Ready    <none>          30s   v1.32.0
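Since each node is just a container, you can also see the cluster from the runtime's point of view (use podman ps under Podman):

```shell
# Lists the node containers that make up the cluster
docker ps --filter name=kind --format '{{.Names}}\t{{.Status}}'
```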

The demo app is a Node.js server that records page visits to PostgreSQL and displays them.

  1. Create app/package.json

    app/package.json
    {
      "name": "hello-k8s",
      "version": "1.0.0",
      "private": true,
      "main": "server.js",
      "scripts": {
        "start": "node server.js"
      },
      "dependencies": {
        "pg": "^8.13.0"
      }
    }
  2. Create app/server.js

    app/server.js
    const http = require("http");
    const os = require("os");
    const { Pool } = require("pg");

    const pool = new Pool({
      host: process.env.PGHOST || "postgres-ha-rw",
      port: parseInt(process.env.PGPORT || "5432"),
      user: process.env.PGUSER || "app",
      password: process.env.PGPASSWORD || "app",
      database: process.env.PGDATABASE || "app",
    });

    async function init() {
      await pool.query(`
        CREATE TABLE IF NOT EXISTS visits (
          id SERIAL PRIMARY KEY,
          ts TIMESTAMPTZ DEFAULT NOW(),
          pod TEXT
        )
      `);
    }

    const server = http.createServer(async (req, res) => {
      if (req.url === "/healthz") {
        try {
          await pool.query("SELECT 1");
          res.writeHead(200).end("ok");
        } catch {
          res.writeHead(503).end("db unreachable");
        }
        return;
      }
      try {
        const pod = os.hostname();
        await pool.query("INSERT INTO visits (pod) VALUES ($1)", [pod]);
        const { rows: countRows } = await pool.query("SELECT COUNT(*)::int AS n FROM visits");
        const { rows: recent } = await pool.query(
          "SELECT pod, ts FROM visits ORDER BY id DESC LIMIT 5",
        );
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end(`<!DOCTYPE html>
    <html><body>
      <h1>Hello from <code>${pod}</code></h1>
      <p>Total visits: <strong>${countRows[0].n}</strong></p>
      <h3>Recent visits</h3>
      <table>
        <tr><th>Pod</th><th>Time</th></tr>
        ${recent.map((r) => `<tr><td>${r.pod}</td><td>${r.ts}</td></tr>`).join("")}
      </table>
    </body></html>`);
      } catch (e) {
        res.writeHead(500).end("Error: " + e.message);
      }
    });

    init()
      .then(() => server.listen(3000, () => console.log("listening :3000")))
      .catch((e) => {
        console.error("init failed:", e);
        process.exit(1);
      });
  3. Create app/Dockerfile

    # app/Dockerfile
    FROM node:22-alpine
    WORKDIR /app
    COPY package.json ./
    RUN npm install --omit=dev
    COPY server.js ./
    USER node
    EXPOSE 3000
    CMD ["node", "server.js"]
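Before building the image you can optionally smoke-test the app outside Kubernetes against a throwaway PostgreSQL container (assumes Docker, a free local port 5432, and the container name pg, all illustrative):

```shell
# Disposable Postgres matching the app's default credentials
docker run -d --rm --name pg \
  -e POSTGRES_USER=app -e POSTGRES_PASSWORD=app -e POSTGRES_DB=app \
  -p 5432:5432 postgres:17-alpine

cd app && npm install
PGHOST=localhost node server.js   # then open http://localhost:3000
```

Stop the database with docker stop pg when done.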
Build the image:
docker build -t localhost/hello-k8s:latest app/

Kind nodes run their own containerd, separate from your host. You need to explicitly load images into the cluster.

kind load docker-image localhost/hello-k8s:latest

With Podman, build with podman build and load via an image archive instead: podman save localhost/hello-k8s:latest -o hello-k8s.tar, then kind load image-archive hello-k8s.tar.
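You can spot-check that the image actually reached a node's containerd store (node names assume the default cluster name, kind; kind load distributes the image to every node):

```shell
# crictl ships inside the kind node image
docker exec kind-control-plane crictl images | grep hello-k8s
```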
Next, write the Kubernetes manifests in k8s/:

  1. Namespace

    k8s/00-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo
  2. PostgreSQL HA cluster

    This uses the CloudNativePG operator to create a 3-instance PostgreSQL cluster with automatic failover.

    k8s/01-postgres-cluster.yaml
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: postgres-ha
      namespace: demo
    spec:
      instances: 3
      postgresql:
        parameters:
          max_connections: "100"
          shared_buffers: "128MB"
      bootstrap:
        initdb:
          database: app
          owner: app
      storage:
        size: 1Gi
  3. Application deployment

    Two replicas of the Node.js app. Credentials come from the secret that CloudNativePG creates automatically.

    k8s/02-app.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-app
      namespace: demo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello-app
      template:
        metadata:
          labels:
            app: hello-app
        spec:
          containers:
            - name: app
              image: localhost/hello-k8s:latest
              imagePullPolicy: Never
              ports:
                - containerPort: 3000
              env:
                - name: PGHOST
                  value: postgres-ha-rw
                - name: PGPORT
                  value: "5432"
                - name: PGDATABASE
                  value: app
                - name: PGUSER
                  valueFrom:
                    secretKeyRef:
                      name: postgres-ha-app
                      key: username
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-ha-app
                      key: password
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 3000
                periodSeconds: 5
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: 3000
                periodSeconds: 10
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-app
      namespace: demo
    spec:
      selector:
        app: hello-app
      ports:
        - port: 3000
          targetPort: 3000
  4. Caddy reverse proxy

    k8s/03-caddy.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: caddyfile
      namespace: demo
    data:
      Caddyfile: |
        :80 {
            reverse_proxy hello-app:3000
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: caddy
      namespace: demo
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: caddy
      template:
        metadata:
          labels:
            app: caddy
        spec:
          containers:
            - name: caddy
              image: caddy:2-alpine
              ports:
                - containerPort: 80
              volumeMounts:
                - name: caddyfile
                  mountPath: /etc/caddy/Caddyfile
                  subPath: Caddyfile
              readinessProbe:
                httpGet:
                  path: /
                  port: 80
                periodSeconds: 5
          volumes:
            - name: caddyfile
              configMap:
                name: caddyfile
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: caddy
      namespace: demo
    spec:
      type: NodePort
      selector:
        app: caddy
      ports:
        - port: 80
          targetPort: 80
          nodePort: 30080

Deploy the stack

  1. Install the CloudNativePG operator

    kubectl apply --server-side \
    -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.25/releases/cnpg-1.25.1.yaml

    Wait for the controller to become available:

    kubectl wait deployment/cnpg-controller-manager \
    -n cnpg-system \
    --for=condition=Available \
    --timeout=120s
  2. Apply the manifests

    kubectl apply -f k8s/
  3. Wait for PostgreSQL to come up

    The CNPG operator creates pods one at a time — primary first, then replicas. This takes a few minutes.

    kubectl wait pod \
    -n demo \
    -l cnpg.io/cluster=postgres-ha \
    --for=condition=Ready \
    --timeout=300s
  4. Wait for application rollouts

    kubectl rollout status deployment/hello-app -n demo --timeout=120s
    kubectl rollout status deployment/caddy -n demo --timeout=60s
  5. Verify everything is running

    kubectl get all -n demo

    You should see 3 PostgreSQL pods, 2 hello-app pods, and 1 Caddy pod — all Running and Ready.
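The CNPG Cluster resource summarizes database health in one line, which kubectl get all does not show because it is a custom resource:

```shell
# Shows instance count, ready count, status, and the current primary
kubectl get cluster -n demo postgres-ha
```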

Port-forward the Caddy service to your local machine:

kubectl port-forward -n demo svc/caddy 8080:80

Open http://localhost:8080 in your browser. Each refresh records a visit to PostgreSQL and shows which pod served the request. Refresh a few times to see requests distribute across the two app replicas.
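From a second terminal you can watch the two replicas alternate (assumes the port-forward above is still running; the grep pattern matches the generated pod names):

```shell
# Each line is the pod name that served the request
for i in 1 2 3 4 5; do
  curl -s http://localhost:8080 | grep -o 'hello-app-[a-z0-9]*-[a-z0-9]*' | head -n 1
done
```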

One of the advantages of CloudNativePG is automatic failover. Delete the primary PostgreSQL pod and watch the operator promote a replica:

# Find the primary
kubectl get pods -n demo -l cnpg.io/cluster=postgres-ha,role=primary
# Delete it
kubectl delete pod -n demo -l cnpg.io/cluster=postgres-ha,role=primary
# Watch the failover
kubectl get pods -n demo -l cnpg.io/cluster=postgres-ha -w

Within seconds, one of the replicas becomes the new primary. The app stays available because it connects through the postgres-ha-rw service, which the operator updates to point at the new primary.
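You can list the services the operator maintains; postgres-ha-rw always points at the primary, postgres-ha-ro at the replicas, and postgres-ha-r at any instance:

```shell
kubectl get svc -n demo | grep postgres-ha
```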

Delete the cluster when you’re done:

kind delete cluster

This removes all containers, networks, and volumes associated with the cluster. Nothing persists.
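To confirm nothing is left behind (use podman ps under Podman):

```shell
kind get clusters                  # should report no clusters
docker ps -a --filter name=kind    # no node containers remaining
```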

Pods stuck in Pending

Check node status with kubectl get nodes. If nodes show NotReady, the kubelet inside the Kind containers may not have started. On Linux with Podman, this almost always means cgroup delegation is missing — revisit the Configure Podman section.

ErrImageNeverPull

The image wasn’t loaded into the Kind cluster. Re-run the image loading step. With Podman, always use podman save + kind load image-archive rather than kind load docker-image.

Connection refused on port 8080

Make sure the kubectl port-forward command is still running. It blocks the terminal — use a separate shell for other commands. If the Caddy pod isn’t ready, check its logs: kubectl logs -n demo -l app=caddy.

PostgreSQL pods in CrashLoopBackOff

The CNPG operator may not be installed yet. Check with kubectl get deployment -n cnpg-system. If it’s missing, install it (step 1 of Deploy the stack) and re-apply the manifests.