Deploying a Node.js Application
This tutorial walks through deploying a Node.js app on Kubernetes that connects to the CloudNativePG PostgreSQL cluster. You will build a container image, write a Deployment and Service, wire up database credentials from CNPG secrets, configure health checks, and expose the app through Traefik.
A Minimal Node.js App
Start with a small Express server that connects to PostgreSQL. The app maintains two connection pools: one pointed at the primary instance for writes, and one pointed at the read-only service for reads. CNPG creates both services automatically.
```javascript
const express = require("express");
const { Pool } = require("pg");

const writePool = new Pool({ connectionString: process.env.DATABASE_URL });
const readPool = new Pool({
  host: process.env.DATABASE_READ_HOST || process.env.DB_HOST,
  port: process.env.DB_PORT,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

const app = express();
app.use(express.json()); // required so req.body is populated for POST /items

app.get("/healthz", (req, res) => res.json({ status: "ok" }));

app.get("/ready", async (req, res) => {
  try {
    await writePool.query("SELECT 1");
    res.json({ status: "ready" });
  } catch (err) {
    res.status(503).json({ status: "not ready" });
  }
});

app.get("/items", async (req, res) => {
  const result = await readPool.query("SELECT * FROM items ORDER BY id");
  res.json(result.rows);
});

app.post("/items", async (req, res) => {
  const { name } = req.body;
  const result = await writePool.query(
    "INSERT INTO items (name) VALUES ($1) RETURNING *",
    [name]
  );
  res.status(201).json(result.rows[0]);
});

app.listen(3000, () => console.log("Listening on :3000"));
```

The /healthz endpoint returns a simple 200. The /ready endpoint checks the database connection. Kubernetes uses these two endpoints differently, as explained in the health check section below.
Read operations hit the readPool (pointed at the -ro service), and writes hit the writePool (pointed at the -rw service or the full connection URI). This spreads load across replicas without application-level routing logic.
Dockerfile
Build a production image with a multi-stage Dockerfile:
```dockerfile
FROM node:22-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY . .

FROM node:22-alpine
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
COPY --from=build /app .
EXPOSE 3000
CMD ["node", "server.js"]
```

Three things to note:
- Multi-stage build. The first stage installs dependencies. The second stage copies the result into a clean image. This keeps the final image small and avoids shipping build tools.
- Non-root user. The appuser account prevents the process from running as root inside the container. Kubernetes SecurityContext can enforce this cluster-wide, but the Dockerfile should do it regardless.
- node directly, not npm start. Running node server.js means the Node process receives SIGTERM directly from Kubernetes during pod shutdown. npm start spawns a child process that may not forward signals, causing a 30-second forced kill instead of a graceful shutdown.
Kubernetes Deployment
The Deployment runs three replicas of the app. Each pod gets database credentials from the CNPG-generated secret and the read-only service hostname.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
  labels:
    app: node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: registry.example.com/node-app:1.0.0
          ports:
            - containerPort: 3000
              name: http
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: my-postgres-app
                  key: uri
            - name: DATABASE_READ_HOST
              value: "my-postgres-ro.default.svc"
            # Credentials for the read pool, from the same CNPG secret
            - name: DB_PORT
              valueFrom:
                secretKeyRef:
                  name: my-postgres-app
                  key: port
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: my-postgres-app
                  key: user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-postgres-app
                  key: password
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: my-postgres-app
                  key: dbname
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
            periodSeconds: 15
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: http
            periodSeconds: 10
            failureThreshold: 2
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```

Caution — Avoid initialDelaySeconds

initialDelaySeconds delays the first probe by a fixed duration, masking slow starts rather than detecting them. If a container takes longer than expected to start, the delay hides the problem — and if it starts quickly, the delay wastes time. The readiness probe already gates traffic: a failing readiness probe keeps the pod out of Service endpoints without restarting it. For containers with genuinely slow startup (JVM apps, large ML models), use a startupProbe instead — it runs only during startup and prevents liveness kills until it passes, without hiding failures behind an arbitrary timer.
The DATABASE_URL env var pulls the full connection URI from the CNPG secret (my-postgres-app). That URI includes the host, port, username, password, and database name for the primary instance. DATABASE_READ_HOST points at the read-only service so the readPool in the app routes queries to replicas.
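An alternative to injecting separate DB_* variables for the read pool is to reuse the credentials embedded in the uri key, parsing it and swapping in the read-only host. A sketch, assuming the URI has the usual postgresql://user:password@host:port/dbname shape (readPoolConfig is a hypothetical helper, not part of any library):

```javascript
// Build read-pool settings by parsing the primary connection URI and
// substituting the read-only service host. Node's WHATWG URL parser
// handles postgresql:// URIs with a user:password@host:port authority.
function readPoolConfig(uri, readHost) {
  const u = new URL(uri);
  return {
    host: readHost || u.hostname,            // -ro service, else primary
    port: Number(u.port) || 5432,
    user: decodeURIComponent(u.username),
    password: decodeURIComponent(u.password), // undo percent-encoding
    database: u.pathname.replace(/^\//, ""),
  };
}

// Example with a made-up URI:
const cfg = readPoolConfig(
  "postgresql://app:s3cret@my-postgres-rw.default.svc:5432/app",
  "my-postgres-ro.default.svc"
);
// cfg.host === "my-postgres-ro.default.svc", cfg.database === "app"
```

With this in place, `new Pool(readPoolConfig(process.env.DATABASE_URL, process.env.DATABASE_READ_HOST))` yields a pool with the primary's credentials but the replica host.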
Service
A Service gives the pods a stable internal DNS name and load-balances traffic across ready pods:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: http
```

Other pods in the cluster can reach the app at node-app.default.svc:80. Traefik routes external traffic through this Service.
Traefik IngressRoute
Expose the app to the internet with a Traefik IngressRoute:
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: node-app
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`app.example.com`)
      kind: Rule
      services:
        - name: node-app
          port: 80
  tls:
    certResolver: letsencrypt
```

This routes https://app.example.com to the node-app Service on port 80. The certResolver: letsencrypt field tells Traefik to provision and renew a TLS certificate automatically. See the Traefik tutorial for the full Traefik setup, including the ACME resolver configuration.
Connecting to CNPG Secrets
CloudNativePG creates a Secret named <cluster>-app for each cluster. It contains host, port, dbname, user, password, uri, and jdbc-uri keys. Three patterns for consuming these credentials:
Individual secretKeyRef (recommended)
Pick specific keys from the secret. Each env var maps to one key:
```yaml
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: my-postgres-app
        key: uri
```

This is the clearest approach. You see every injected value in the Deployment manifest. Debugging credential issues takes one kubectl describe pod command.
envFrom with secretRef
Inject all keys from the secret as env vars at once:
```yaml
envFrom:
  - secretRef:
      name: my-postgres-app
```

Shorter, but the env var names match the secret keys (host, port, dbname, etc.), which may not match what your app expects. You lose visibility into what gets injected without inspecting the secret itself.
Volume-mounted secrets
Mount the secret as files in a directory:
```yaml
# In the pod spec:
volumes:
  - name: db-creds
    secret:
      secretName: my-postgres-app
# In the container spec:
volumeMounts:
  - name: db-creds
    mountPath: /etc/db-creds
    readOnly: true
```

Each key becomes a file (/etc/db-creds/host, /etc/db-creds/password, etc.). Useful when libraries read credentials from files rather than environment variables. Kubernetes also updates mounted secrets automatically when the Secret object changes — env vars require a pod restart.
Health Check Design
Kubernetes uses three types of probes to manage pod lifecycle:
Liveness probe — Is the process alive? The /healthz endpoint returns 200 without checking any dependencies. If this probe fails (three consecutive failures with the config above), Kubernetes restarts the pod. Keep it cheap. A liveness probe that checks the database will restart your app when the database is down, which makes things worse.
Readiness probe — Can the pod serve traffic? The /ready endpoint checks the database connection. If this probe fails, Kubernetes removes the pod from the Service’s endpoint list. No traffic gets routed to it. The pod stays running, and once the database recovers, the probe passes again and traffic resumes. This is the correct response to a downstream outage: stop sending traffic, but do not restart.
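One refinement worth considering for /ready: if the database is slow rather than down, the query can hang longer than the probe's timeoutSeconds, and the probe fails without the handler ever responding. Bounding the check keeps the failure mode fast. A sketch (withTimeout is a hypothetical helper; checkDb stands in for the writePool.query("SELECT 1") call from the app above):

```javascript
// Race the dependency check against a deadline so the /ready handler
// responds within the probe's timeout instead of hanging on a slow DB.
function withTimeout(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timed out")), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Usage inside the handler:
// app.get("/ready", async (req, res) => {
//   try { await withTimeout(checkDb(), 800); res.json({ status: "ready" }); }
//   catch { res.status(503).json({ status: "not ready" }); }
// });
```

Keep the in-handler deadline below the probe's timeoutSeconds so a 503 from the app, not a probe timeout, is what Kubernetes observes.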
Startup probe — For slow-starting apps, add a startup probe. Kubernetes runs only the startup probe until it succeeds, then switches to liveness and readiness. Without it, a slow app might fail its liveness probe before it finishes initializing, causing a restart loop. Node.js apps typically start fast enough that a startup probe is unnecessary, but if your app loads large datasets or runs migrations on boot:
```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: http
  failureThreshold: 30
  periodSeconds: 2
```

This gives the app up to 60 seconds (30 failures * 2 seconds) to start before Kubernetes considers it broken.
Resource Sizing for Node.js
Node.js runs JavaScript on a single thread. The event loop handles one operation at a time, relying on asynchronous I/O to stay responsive. This affects how you set resource requests and limits.
CPU: A Node.js process rarely needs more than one core. Set requests.cpu to your steady-state usage (often 100m–250m for a typical web server) and limits.cpu to 500m–1000m for bursts during garbage collection or heavy computation. If you need more throughput, scale horizontally with more replicas rather than giving one pod more CPU.
Memory: V8’s heap grows until garbage collection reclaims it. By default, V8 sizes the heap based on available system memory — but inside a container, “available memory” means the container’s limit. Set NODE_OPTIONS to cap the heap below the container limit:
```yaml
env:
  - name: NODE_OPTIONS
    value: "--max-old-space-size=384"
```

With a 512Mi container limit and a 384MB heap cap, you leave roughly 128MB for the stack, native code, and buffers. Without this cap, V8 may try to use the full 512MB for the heap, triggering an OOM kill when native allocations push total RSS over the limit.
Requests vs limits: requests is what Kubernetes guarantees. The scheduler uses it to place pods. limits is the ceiling. Set requests to your steady-state usage and limits to your expected peak. If requests and limits are equal (a “Guaranteed” QoS class), the pod gets predictable performance but cannot burst.
Verifying the Deployment
Apply the manifests and confirm everything is running:
```shell
kubectl apply -f deployment.yaml -f service.yaml -f ingressroute.yaml
```

Check that the pods start and pass their readiness probes:

```shell
kubectl get pods -l app=node-app
```

All three replicas should show 1/1 READY. If a pod is stuck at 0/1, check its logs:

```shell
kubectl logs -l app=node-app -f
```

Port-forward to the Service and test the health endpoints:

```shell
kubectl port-forward svc/node-app 3000:80
curl http://localhost:3000/healthz
curl http://localhost:3000/ready
```

Both should return JSON with status: ok and status: ready respectively. If /ready returns 503, the app cannot reach the database — check that the CNPG cluster is running and the secret name matches.
Next Steps
With the application deployed and connected to PostgreSQL, the next step is making the whole stack resilient. The Full Stack HA tutorial covers pod disruption budgets, topology spread constraints, and database failover testing.
For encrypting the database credentials in Git, see Secrets Management. For customizing the Traefik routing (path-based rules, middleware, rate limiting), see the Traefik tutorial.