Development Workflow
A single Kubernetes cluster can serve every stage from local development to production. This tutorial walks through a four-environment promotion strategy, the repository layout that supports it, and the daily workflow that ties everything together.
The Four Environments
| Environment | Cluster | Purpose | Domain pattern |
|---|---|---|---|
| Dev | Local k0s (single node) | Rapid iteration, all ports on localhost | *.dev.local |
| Local Staging | Local k0s | Mirror remote staging, test with real TLS | *.staging.local |
| Remote Staging | Remote k0s (DigitalOcean) | Pre-production soak test (~1 week) | *.staging.example.com |
| Remote Production | Same remote cluster, different namespace | Client-facing | *.example.com |
Code moves through these environments in order: dev, local staging, remote staging, then production. Nothing reaches production without sitting in remote staging for about a week.
Why four?
Dev gives you fast feedback. Local staging catches manifest errors before they hit a remote cluster. Remote staging proves the deployment works with real DNS, real TLS, and real resource constraints. Production is production.
Key Design Decisions
Local dev binds to localhost only
When you work at a cafe, you do not want your services exposed to the local network. Bind everything to 127.0.0.1:
```sh
kubectl port-forward svc/my-app 3000:80 -n dev --address 127.0.0.1
```

Or use a NodePort service bound to 127.0.0.1 in the k0s config. Either way, traffic stays on your machine.
Local staging mirrors remote staging
Local staging uses the same manifests as remote staging with different values: self-signed certificates instead of Let’s Encrypt, NodePort instead of LoadBalancer, local DNS instead of real DNS. The gap between the two should be as small as you can make it.
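As a sketch of how that gap stays small, an overlay can swap only the service type while reusing the base manifests untouched. The file path and service name below are illustrative, not from this tutorial's repository:

```yaml
# infrastructure/overlays/local/kustomization.yaml — hypothetical overlay sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: traefik       # assumed service name defined in the base
      spec:
        type: NodePort      # the remote overlay would patch this to LoadBalancer
    target:
      kind: Service
      name: traefik
```

Everything else (the Deployment, the IngressRoute, the labels) comes from the base unchanged, so a manifest error surfaces locally before it ever reaches the remote cluster.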
Remote environments share a cluster
Remote staging and production run in the same cluster as separate namespaces. This saves money on a second cluster while still providing isolation through network policies and resource quotas.
When would you split into separate clusters? When a compliance mandate requires it, when a client contract demands physical isolation, or when the blast radius of a staging failure is unacceptable.
Promotion flow
```
dev → local staging → remote staging (~1 week soak) → production
```

Each step is a git commit. Flux CD watches the repository and applies changes automatically.
Namespace Strategy
Use environment-based namespace names, not client-based ones:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
    client: acme
    billing-group: enterprise
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: production
    client: acme
    billing-group: enterprise
```

The labels make namespaces queryable. Find every namespace for a billing group:
```sh
kubectl get ns -l billing-group=enterprise
```

Find every namespace in a given environment:
```sh
kubectl get ns -l environment=staging
```

Keep namespace names short and predictable. Tooling, scripts, and Kustomize overlays all reference these names. Changing them later means changing everything that depends on them.
When to split clusters
Two namespaces in one cluster works until it does not. Split into separate clusters when:
- A compliance framework (SOC 2, PCI-DSS) requires physical separation
- A client contract mandates dedicated infrastructure
- You need different Kubernetes versions per environment
- Staging load could destabilize production even with resource quotas
For this tutorial, one local cluster and one remote cluster cover everything.
Repository Structure
The repository follows a Flux CD GitOps layout with Kustomize overlays. Three top-level directories separate concerns:
```
clusters/
  local/
    infrastructure.yaml    # Flux Kustomization pointing to infrastructure/overlays/local
    apps.yaml              # Flux Kustomization pointing to apps/overlays/dev
  remote/
    infrastructure.yaml    # Flux Kustomization pointing to infrastructure/overlays/remote
    apps-staging.yaml      # Flux Kustomization pointing to apps/overlays/staging
    apps-production.yaml   # Flux Kustomization pointing to apps/overlays/production
infrastructure/
  base/                    # Shared infra: cert-manager, Traefik, monitoring
  overlays/
    local/                 # Self-signed certs, NodePort, local DNS
    remote/                # Let's Encrypt, LoadBalancer, real DNS
apps/
  base/                    # App manifests: Deployment, Service, IngressRoute
  overlays/
    dev/                   # Dev-specific patches (debug logging, relaxed limits)
    staging/               # Staging patches (resource quotas, network policies)
    production/            # Production patches (tight limits, replicas, HPA)
```

clusters/ tells Flux what to reconcile. infrastructure/ holds third-party components shared across environments. apps/ holds your application manifests.
Each overlay directory contains a kustomization.yaml that references ../../base and applies patches for that environment. Flux applies them in dependency order: infrastructure first, then apps.
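A minimal overlay kustomization.yaml following that pattern might look like this sketch; the patch filename is hypothetical:

```yaml
# apps/overlays/staging/kustomization.yaml — a minimal sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging          # every resource from the base lands in this namespace
resources:
  - ../../base
patches:
  - path: resource-limits-patch.yaml   # hypothetical environment-specific patch
```

The namespace field is what makes the same base deployable to staging and production without editing any base manifest.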
File Placement Decision Tree
When you add a new Kubernetes resource, decide where it lives:
- Managed by Helm? Place it in charts/<app>/templates/. See Helm Charts for guidance.
- Third-party infrastructure? (cert-manager CRDs, Traefik config, monitoring stack) Place it in infrastructure/base/.
- Cluster-scoped? (ClusterRole, ClusterRoleBinding, Namespace) Place it in infrastructure/base/.
- Varies per environment? Place the base in apps/base/ or infrastructure/base/, then add patches in the relevant overlays/<env>/ directory.
- None of the above? Place it in apps/base/.
When in doubt, start in apps/base/. You can always move it later. The important thing is that every manifest lives in exactly one base directory and gets customized through overlays.
Resource Quotas
Resource quotas prevent staging from starving production. Apply them per namespace:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "20"
```

Production gets higher limits:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
```

Without quotas, a runaway staging deployment can consume all cluster resources. Quotas make that impossible. See the Kubernetes documentation on resource quotas for the full list of resources you can constrain.
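One consequence worth knowing: once a quota constrains requests.cpu and requests.memory, the API server rejects any pod in that namespace that does not declare requests and limits. A LimitRange can supply defaults so unannotated pods still schedule; the values below are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: staging-defaults
  namespace: staging
spec:
  limits:
    - type: Container
      default:            # applied as limits when a container declares none
        cpu: 200m
        memory: 256Mi
      defaultRequest:     # applied as requests when a container declares none
        cpu: 100m
        memory: 128Mi
```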
Network Policies
By default, any pod can talk to any other pod in the cluster, across namespaces. That means a bug in staging could hit a production database. Fix this with a default-deny policy and explicit allow rules.
Default deny all ingress
Apply this to both staging and production namespaces:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

This blocks all incoming traffic to every pod in the namespace. Nothing gets in unless another policy explicitly allows it.
Allow traffic from Traefik
Traefik runs in the traefik namespace and needs to reach your application pods. Allow it:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traefik-ingress
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
```

Apply the same pair of policies (default-deny + allow-traefik) to the production namespace. The result: pods in staging cannot reach pods in production, and vice versa. Only Traefik can route external traffic in. See the Kubernetes network policy documentation for more complex rules.
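One caveat: with only the default-deny and Traefik rules in place, pods inside the same namespace cannot reach each other either, so an app pod could not talk to a database pod in staging. If your services call one another, add an intra-namespace allow rule along these lines:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # a bare podSelector matches pods in this namespace only
```

Because the from clause uses a podSelector without a namespaceSelector, it matches only pods in the policy's own namespace, so cross-namespace isolation is preserved.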
Daily Workflow
Section titled “Daily Workflow”1. Develop locally
Start your application in the dev namespace. Use port-forwarding to access it:
```sh
kubectl port-forward svc/my-app 3000:80 -n dev --address 127.0.0.1
```

Iterate on code and manifests. Apply changes directly:
```sh
kubectl apply -k apps/overlays/dev
```

This is the fast loop. No git commits needed. No Flux reconciliation. You apply, you test, you fix, you apply again.
2. Promote to local staging
When the feature works in dev, deploy it to local staging:
```sh
kubectl apply -k apps/overlays/staging
```

This uses the same manifests with staging-specific patches: tighter resource limits, network policies, a staging domain. If something breaks here, it would have broken in remote staging too. Fix it now while the feedback loop is short.
3. Push to remote staging
Commit your changes and push:
```sh
git add apps/
git commit -m "Add widget service"
git push origin main
```

Flux watches the repository and reconciles apps/overlays/staging on the remote cluster within a few minutes. Check the rollout:
```sh
kubectl --context remote get pods -n staging
```

Let it soak for about a week. Monitor logs, watch for memory leaks, verify that restarts stay at zero.
4. Promote to production
After the soak period, update the production overlay. This usually means changing the image tag:
```sh
# Edit apps/overlays/production/kustomization.yaml to reference the tested image
git add apps/overlays/production/
git commit -m "Promote widget service to production"
git push origin main
```

Flux applies the change to the production namespace. The same image that ran in staging for a week now runs in production.
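If the overlays use Kustomize's images transformer, that promotion commit can be a one-line tag change. The image name and tag in this sketch are hypothetical:

```yaml
# apps/overlays/production/kustomization.yaml — sketch of tag pinning
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - ../../base
images:
  - name: registry.example.com/widget-service   # hypothetical image from the base
    newTag: "1.4.2"                             # the tag that soaked in staging
```

Pinning the tag here, rather than editing the base Deployment, keeps the promotion diff small and the base shared across environments.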
Secrets Across Environments
Each environment needs its own secrets (database passwords, API keys, TLS certificates). Use SOPS with age encryption to store them safely in Git. Each cluster gets its own age key. SOPS supports multiple recipients, so you can encrypt a secret to both the local and remote cluster keys at once.
The workflow stays the same across environments: commit encrypted secrets to Git, Flux decrypts them at reconciliation time, plaintext never enters the repository.
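A .sops.yaml at the repository root can express the multiple-recipients setup; the path pattern and age public keys below are placeholders for your own:

```yaml
# .sops.yaml — sketch; substitute your real age public keys
creation_rules:
  - path_regex: apps/overlays/.*secret.*\.yaml   # hypothetical secret file pattern
    encrypted_regex: ^(data|stringData)$          # encrypt only the secret payload
    age: age1localkeyplaceholder,age1remotekeyplaceholder
```

Encrypting only data and stringData keeps the manifest's kind, name, and labels readable in Git diffs while the values stay ciphertext.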
What You Have Now
After setting up this workflow:
- Four environments with clear boundaries and a defined promotion path
- Namespace isolation through network policies and resource quotas
- A repository structure where every manifest has a clear home
- A daily workflow that moves from fast local iteration to production deployment through git commits
The k0s installation guide covers setting up the local cluster. Kubernetes Fundamentals explains the core objects (Pods, Deployments, Services) that your manifests define. Flux CD handles the GitOps reconciliation. From here, you add applications to apps/base/, write overlays for each environment, and let the promotion flow carry them to production.