
Development Workflow

A single Kubernetes cluster can serve every stage from local development to production. This tutorial walks through a four-environment promotion strategy, the repository layout that supports it, and the daily workflow that ties everything together.

| Environment | Cluster | Purpose | Domain pattern |
| --- | --- | --- | --- |
| Dev | Local k0s (single node) | Rapid iteration, all ports on localhost | *.dev.local |
| Local Staging | Local k0s | Mirror remote staging, test with real TLS | *.staging.local |
| Remote Staging | Remote k0s (DigitalOcean) | Pre-production soak test (~1 week) | *.staging.example.com |
| Remote Production | Same remote cluster, different namespace | Client-facing | *.example.com |

Code moves through these environments in order: dev, local staging, remote staging, then production. Nothing reaches production without sitting in remote staging for about a week.

Dev gives you fast feedback. Local staging catches manifest errors before they hit a remote cluster. Remote staging proves the deployment works with real DNS, real TLS, and real resource constraints. Production is production.

When you work at a cafe, you do not want your services exposed to the local network. Bind everything to 127.0.0.1:

kubectl port-forward svc/my-app 3000:80 -n dev --address 127.0.0.1

Or use a NodePort service bound to 127.0.0.1 in the k0s config. Either way, traffic stays on your machine.
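Restricting NodePorts to loopback is a kube-proxy setting. As a sketch, the relevant field is nodePortAddresses in the upstream KubeProxyConfiguration; whether and where your k0s version exposes this through its ClusterConfig is an assumption to verify against the k0s documentation:

```yaml
# Upstream kube-proxy configuration; limiting NodePorts to loopback
# means services are reachable only from the machine itself.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
nodePortAddresses:
  - "127.0.0.0/8"
```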

Local staging uses the same manifests as remote staging with different values: self-signed certificates instead of Let’s Encrypt, NodePort instead of LoadBalancer, local DNS instead of real DNS. The gap between the two should be as small as you can make it.
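As a sketch of how the local overlay swaps values without touching base (the Service name traefik is illustrative), a Kustomize patch can flip the Service type:

```yaml
# infrastructure/overlays/local/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Service
      name: traefik
    patch: |-
      # Local clusters have no cloud load balancer; use NodePort instead.
      - op: replace
        path: /spec/type
        value: NodePort
```

The remote overlay would carry the opposite patch (or none, if base already defaults to LoadBalancer), keeping the diff between the two environments down to a handful of lines.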

Remote staging and production run in the same cluster as separate namespaces. This saves money on a second cluster while still providing isolation through network policies and resource quotas.

When would you split into separate clusters? When a compliance mandate requires it, when a client contract demands physical isolation, or when the blast radius of a staging failure is unacceptable.

dev → local staging → remote staging (~1 week soak) → production

Each step is a git commit. Flux CD watches the repository and applies changes automatically.
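A minimal sketch of one such Flux Kustomization (names and intervals are illustrative); dependsOn makes Flux reconcile infrastructure before apps:

```yaml
# clusters/remote/apps-staging.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps-staging
  namespace: flux-system
spec:
  interval: 5m                       # how often Flux re-checks the repo
  path: ./apps/overlays/staging      # what this Kustomization applies
  prune: true                        # delete resources removed from Git
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: infrastructure           # apply infra (CRDs, Traefik) first
```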

Use environment-based namespace names, not client-based ones:

apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
    client: acme
    billing-group: enterprise
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: production
    client: acme
    billing-group: enterprise

The labels make namespaces queryable. Find every namespace for a billing group:

kubectl get ns -l billing-group=enterprise

Find every namespace in a given environment:

kubectl get ns -l environment=staging

Keep namespace names short and predictable. Tooling, scripts, and Kustomize overlays all reference these names. Changing them later means changing everything that depends on them.

Two namespaces in one cluster works until it does not. Split into separate clusters when:

  • A compliance framework (SOC 2, PCI-DSS) requires physical separation
  • A client contract mandates dedicated infrastructure
  • You need different Kubernetes versions per environment
  • Staging load could destabilize production even with resource quotas

For this tutorial, one local cluster and one remote cluster cover everything.

The repository follows a Flux CD GitOps layout with Kustomize overlays. Three top-level directories separate concerns:

clusters/
  local/
    infrastructure.yaml    # Flux Kustomization pointing to infrastructure/overlays/local
    apps.yaml              # Flux Kustomization pointing to apps/overlays/dev
  remote/
    infrastructure.yaml    # Flux Kustomization pointing to infrastructure/overlays/remote
    apps-staging.yaml      # Flux Kustomization pointing to apps/overlays/staging
    apps-production.yaml   # Flux Kustomization pointing to apps/overlays/production
infrastructure/
  base/                    # Shared infra: cert-manager, Traefik, monitoring
  overlays/
    local/                 # Self-signed certs, NodePort, local DNS
    remote/                # Let's Encrypt, LoadBalancer, real DNS
apps/
  base/                    # App manifests: Deployment, Service, IngressRoute
  overlays/
    dev/                   # Dev-specific patches (debug logging, relaxed limits)
    staging/               # Staging patches (resource quotas, network policies)
    production/            # Production patches (tight limits, replicas, HPA)

clusters/ tells Flux what to reconcile. infrastructure/ holds third-party components shared across environments. apps/ holds your application manifests.

Each overlay directory contains a kustomization.yaml that references ../../base and applies patches for that environment. Flux applies them in dependency order: infrastructure first, then apps.

When you add a new Kubernetes resource, decide where it lives:

  1. Managed by Helm? Place it in charts/<app>/templates/. See Helm Charts for guidance.
  2. Third-party infrastructure? (cert-manager CRDs, Traefik config, monitoring stack) Place it in infrastructure/base/.
  3. Cluster-scoped? (ClusterRole, ClusterRoleBinding, Namespace) Place it in infrastructure/base/.
  4. Varies per environment? Place the base in apps/base/ or infrastructure/base/, then add patches in the relevant overlays/<env>/ directory.
  5. None of the above? Place it in apps/base/.

When in doubt, start in apps/base/. You can always move it later. The important thing is that every manifest lives in exactly one base directory and gets customized through overlays.

Resource quotas prevent staging from starving production. Apply them per namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "20"

Production gets higher limits:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"

Without quotas, a runaway staging deployment can consume all cluster resources. Quotas make that impossible. See the Kubernetes documentation on resource quotas for the full list of resources you can constrain.

By default, any pod can talk to any other pod in the cluster, across namespaces. That means a bug in staging could hit a production database. Fix this with a default-deny policy and explicit allow rules.

Apply this to both staging and production namespaces:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress

This blocks all incoming traffic to every pod in the namespace. Nothing gets in unless another policy explicitly allows it.

Traefik runs in the traefik namespace and needs to reach your application pods. Allow it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traefik-ingress
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
Apply the same pair of policies (default-deny + allow-traefik) to the production namespace. The result: pods in staging cannot reach pods in production, and vice versa. Only Traefik can route external traffic in. See the Kubernetes network policy documentation for more complex rules.
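The allow rule above opens every pod in the namespace to Traefik. A tighter variant scopes it to just the pods that actually receive traffic (the app: my-app label is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traefik-to-my-app
  namespace: staging
spec:
  # Only pods carrying this label accept traffic from Traefik;
  # everything else in the namespace stays under default-deny.
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
```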

Start your application in the dev namespace. Use port-forwarding to access it:

kubectl port-forward svc/my-app 3000:80 -n dev --address 127.0.0.1

Iterate on code and manifests. Apply changes directly:

kubectl apply -k apps/overlays/dev

This is the fast loop. No git commits needed. No Flux reconciliation. You apply, you test, you fix, you apply again.

When the feature works in dev, deploy it to local staging:

kubectl apply -k apps/overlays/staging

This uses the same manifests with staging-specific patches: tighter resource limits, network policies, a staging domain. If something breaks here, it would have broken in remote staging too. Fix it now while the feedback loop is short.
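One way to express those staging-specific limits is a strategic-merge patch file referenced from the overlay's kustomization.yaml (the Deployment name my-app and the numbers are illustrative):

```yaml
# apps/overlays/staging/deployment-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1                    # staging does not need production replica counts
  template:
    spec:
      containers:
        - name: my-app
          resources:             # keep staging inside its ResourceQuota
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
```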

Commit your changes and push:

git add apps/
git commit -m "Add widget service"
git push origin main

Flux watches the repository and reconciles apps/overlays/staging on the remote cluster within a few minutes. Check the rollout:

kubectl --context remote get pods -n staging

Let it soak for about a week. Monitor logs, watch for memory leaks, verify that restarts stay at zero.

After the soak period, update the production overlay. This usually means changing the image tag:

# Edit apps/overlays/production/kustomization.yaml to reference the tested image
git add apps/overlays/production/
git commit -m "Promote widget service to production"
git push origin main

Flux applies the change to the production namespace. The same image that ran in staging for a week now runs in production.
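The edit itself is typically one line in the Kustomize images transform (registry and tag below are illustrative):

```yaml
# apps/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: registry.example.com/widget   # image name as it appears in base
    newTag: "1.4.2"                     # the tag that soaked in staging
```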

Each environment needs its own secrets (database passwords, API keys, TLS certificates). Use SOPS with age encryption to store them safely in Git. Each cluster gets its own age key. SOPS supports multiple recipients, so you can encrypt a secret to both the local and remote cluster keys at once.
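A sketch of a .sops.yaml that encrypts every matching secret to both cluster keys (the path regex and the age public keys are placeholders for your own):

```yaml
# .sops.yaml at the repository root
creation_rules:
  - path_regex: .*/overlays/(staging|production)/secrets/.*\.yaml
    # Encrypt only the secret payload; metadata stays readable for diffs.
    encrypted_regex: ^(data|stringData)$
    # Comma-separated list of age recipients: local key, then remote key.
    age: "age1localxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,age1remotexxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```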

The workflow stays the same across environments: commit encrypted secrets to Git, Flux decrypts them at reconciliation time, plaintext never enters the repository.
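On the Flux side, decryption is switched on per Kustomization; the Secret name holding the cluster's age key is an assumption:

```yaml
# Excerpt from a Flux Kustomization (e.g. clusters/remote/apps-staging.yaml)
spec:
  decryption:
    provider: sops        # Flux runs SOPS during reconciliation
    secretRef:
      name: sops-age      # Kubernetes Secret containing the age private key
```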

After setting up this workflow:

  • Four environments with clear boundaries and a defined promotion path
  • Namespace isolation through network policies and resource quotas
  • A repository structure where every manifest has a clear home
  • A daily workflow that moves from fast local iteration to production deployment through git commits

The k0s installation guide covers setting up the local cluster. Kubernetes Fundamentals explains the core objects (Pods, Deployments, Services) that your manifests define. Flux CD handles the GitOps reconciliation. From here, you add applications to apps/base/, write overlays for each environment, and let the promotion flow carry them to production.