Local Dev Workflow

This tutorial walks through the full local dev loop: suspend Flux, apply manifests, port-forward, iterate, and clean up. By the end you will have services running at *.k8s.local on localhost and understand what each step does.

You will need:

  • A running k0s cluster with Flux bootstrapped (mise run flux-bootstrap)
  • mkcert TLS set up (kb tls mkcert-setup)
  • /etc/hosts entries pointing your domains to 127.0.0.1:
127.0.0.1 docs.k8s.local
127.0.0.1 grafana.k8s.local
127.0.0.1 traefik.k8s.local
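To check that the entries are in place, loop over the three names (this assumes the standard /etc/hosts location):

```shell
# Report which of the required /etc/hosts entries exist
for h in docs.k8s.local grafana.k8s.local traefik.k8s.local; do
  if grep -q "127\.0\.0\.1[[:space:]].*$h" /etc/hosts; then
    echo "$h ok"
  else
    echo "$h MISSING"
  fi
done
```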

Flux watches this git repo and reconciles every minute. If you apply local changes without suspending Flux first, it will revert them on the next reconciliation cycle.

Terminal window
flux suspend kustomization --all

This pauses every Flux Kustomization. The cluster keeps running — pods, services, and ingress all stay up. Flux just stops comparing the live state to git.

Forgetting to resume Flux leaves the cluster in a state where git pushes have no effect. Wrap your session in a trap so Flux resumes no matter how you exit:

Terminal window
flux suspend kustomization --all
trap 'flux resume kustomization --all' EXIT
# ... do your work ...
# Flux resumes automatically when this shell exits (Ctrl+D, exit, or Ctrl+C)

The EXIT trap fires on normal exit, Ctrl+C, and script errors. The only case it misses is kill -9 — if that happens, resume manually:

Terminal window
flux resume kustomization --all
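You can watch the EXIT trap fire in isolation. This throwaway subshell errors out partway through, yet the trap still runs (the echo stands in for the real flux resume command):

```shell
# The subshell fails on `false`, but the EXIT trap still fires
out=$(bash -c 'trap "echo resumed" EXIT; echo working; false')
echo "$out"
# working
# resumed
```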

The project’s mise run apply task does this automatically. It suspends Flux, applies your manifests, then resumes Flux when the task exits. If you prefer not to manage the trap yourself:

Terminal window
mise run apply -- apps/overlays/dev

With Flux suspended, apply the dev overlay:

Terminal window
kubectl apply -k apps/overlays/dev

-k tells kubectl to run Kustomize on the directory before applying. This builds the full manifest set from apps/base/docs plus the dev-specific IngressRoute, then sends it to the cluster.

Run it twice if you see errors about missing namespaces or CRDs on the first pass. The first apply creates the namespace and any CRDs; the second creates the resources that depend on them. This is normal: kubectl apply sends everything in one pass and does not wait for dependencies the way Flux does.

Terminal window
kubectl apply -k apps/overlays/dev
kubectl apply -k apps/overlays/dev

Verify the resources exist:

Terminal window
kubectl get all -n docs

The Traefik Service is ClusterIP — it has no external IP. Port-forwarding maps its cluster-internal ports to your localhost:

Terminal window
kubectl port-forward svc/traefik -n traefik 80:80 443:443

This runs in the foreground. Open a new terminal or background it with &. With port-forwarding active and /etc/hosts configured, open https://docs.k8s.local in your browser.
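To background it with automatic cleanup, reuse the trap pattern. In this sketch, `sleep` stands in for the long-running `kubectl port-forward` process:

```shell
# Background a long-lived process and kill it when the shell exits
sleep 300 &    # stand-in for: kubectl port-forward svc/traefik ... &
PF_PID=$!
trap 'kill "$PF_PID" 2>/dev/null' EXIT
kill -0 "$PF_PID" && echo "forward running (pid $PF_PID)"
```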

The project includes a mise task that also forwards the Traefik dashboard on port 8080:

Terminal window
mise run traefik

This builds the docs image first (dependency), waits for the Traefik pod to be ready, then starts all three port-forwards in parallel with a cleanup trap.

After rebuilding an image, Kubernetes does not automatically pull it again — the pod is already running the old image. A rolling restart tells Kubernetes to terminate the existing pods and create new ones, which pull the updated image:

Terminal window
kubectl rollout restart deployment/docs -n docs

deployment/docs refers to a Kubernetes Deployment resource, not a directory. There is no deploy/ folder in the repo. A Deployment is a Kubernetes object that declares how many replicas of a pod to run and what container image they use. It lives in the cluster, defined by the manifest at apps/base/docs/deployment.yaml.

The word “deployment” in kubectl rollout restart deployment/docs is the resource kind — the same way pod/my-pod or svc/traefik name a resource by kind and name. The full path is:

deployment/docs → kind: Deployment, metadata.name: docs
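That reference is nothing more than a kind and a name joined by a slash. A pure-shell split shows the two parts kubectl parses out of it (an illustration of the naming scheme, not how kubectl is implemented):

```shell
# Split a TYPE/NAME resource reference the way kubectl interprets it
ref="deployment/docs"
kind=${ref%%/*}   # strip from the first slash onward -> deployment
name=${ref#*/}    # strip through the first slash     -> docs
echo "kind=$kind name=$name"
# kind=deployment name=docs
```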

kubectl talks to the Kubernetes API, not the filesystem. The Deployment object exists in the cluster’s etcd datastore regardless of what directories exist on disk.

Build the image, import it into k0s, and restart:

Terminal window
# Build the image
podman build -t cluster-docs:latest --target production docs/
# Remove the old image first (containerd won't overwrite an existing tag,
# and the import reports success either way)
sudo k0s ctr images rm docker.io/library/cluster-docs:latest localhost/cluster-docs:latest
# Move the image from podman to k0s
podman save cluster-docs:latest | sudo k0s ctr images import -
# Tag the image with the expected name
# podman save exports as localhost/cluster-docs:latest, but the Deployment
# manifest references cluster-docs:latest which containerd resolves to
# docker.io/library/cluster-docs:latest
sudo k0s ctr images tag localhost/cluster-docs:latest docker.io/library/cluster-docs:latest
# Rollout the new image
kubectl rollout restart deployment/docs -n docs

containerd does not overwrite tags. If you skip the k0s ctr images rm step, containerd keeps the old image digest even though the import succeeds. Every new pod will launch the stale image. You must delete the old tag first, then import. This is easy to miss because the import reports success either way.

containerd also needs the docker.io tag. Podman exports images under localhost/, but when the Deployment manifest says image: cluster-docs:latest Kubernetes resolves that to docker.io/library/cluster-docs:latest. Without the retag step, containerd has the image but cannot find it under the name the kubelet asks for.
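The short-name expansion follows Docker's normalization rules: no registry host means docker.io, no namespace means library. A shell sketch of that logic (my approximation of the rules, not containerd source):

```shell
# Approximate Docker-style short-name normalization
normalize() {
  case $1 in
    localhost/*|*.*/*|*:*/*) echo "$1" ;;       # registry host already present
    */*) echo "docker.io/$1" ;;                 # namespace, default registry
    *)   echo "docker.io/library/$1" ;;         # bare name, full defaults
  esac
}
normalize cluster-docs:latest            # docker.io/library/cluster-docs:latest
normalize localhost/cluster-docs:latest  # localhost/cluster-docs:latest
```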

Verify with:

Terminal window
# Compare image IDs — these should match after a correct import
podman inspect cluster-docs:latest --format '{{.Id}}'
kubectl get pod -n docs -o jsonpath='{.items[0].status.containerStatuses[0].imageID}'

Or use the mise task that handles the build and import:

Terminal window
mise run docs-build
kubectl rollout restart deployment/docs -n docs

Watch the rollout complete:

Terminal window
kubectl rollout status deployment/docs -n docs

A complete local dev session:

Terminal window
# Suspend Flux and auto-resume on exit
flux suspend kustomization --all
trap 'flux resume kustomization --all' EXIT
# Apply manifests (twice for ordering)
kubectl apply -k apps/overlays/dev
kubectl apply -k apps/overlays/dev
# Port-forward in the background
kubectl port-forward svc/traefik -n traefik 80:80 443:443 &
# Open https://docs.k8s.local in your browser
# ... edit docs site code ...
# Rebuild: delete old image, import new one, retag, restart
podman build -t cluster-docs:latest --target production docs/
sudo k0s ctr images rm docker.io/library/cluster-docs:latest localhost/cluster-docs:latest
podman save cluster-docs:latest | sudo k0s ctr images import -
sudo k0s ctr images tag localhost/cluster-docs:latest docker.io/library/cluster-docs:latest
kubectl rollout restart deployment/docs -n docs
# When done, exit the shell — Flux resumes automatically

Or use the single dev task that does everything — suspend, apply, port-forward — and resumes Flux when you Ctrl+C:

Terminal window
mise run dev

This suspends all Flux Kustomizations, applies the dev overlay (twice for ordering), waits for the Traefik pod, then port-forwards HTTP, HTTPS, and the dashboard to localhost. When you press Ctrl+C or the shell exits, Flux resumes automatically.

The individual tasks still work if you need more control:

Terminal window
mise run apply -- apps/overlays/dev # suspends Flux, applies, resumes on exit
mise run traefik # port-forward with auto-cleanup

An alternative to the rebuild-and-restart cycle is a bind mount. On a single-node k0s cluster the host filesystem is the node filesystem, so a hostPath volume can mount docs/ directly into the pod. Astro's built-in file watcher then handles HMR, with no tar-pipe sync needed.
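A hostPath patch for that setup might look like the following sketch. Every name and path here is an assumption for illustration, not the repo's actual patch:

```shell
# Write a hypothetical hostPath dev patch (illustrative names and paths)
cat <<'EOF' > /tmp/docs-hostpath-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs
spec:
  template:
    spec:
      containers:
        - name: docs
          volumeMounts:
            - name: docs-src
              mountPath: /app/docs   # assumed container path
      volumes:
        - name: docs-src
          hostPath:
            path: /var/home/you/cluster/docs   # k0s host path; differs on kind
            type: Directory
EOF
grep -c 'hostPath:' /tmp/docs-hostpath-patch.yaml   # 1
```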

This breaks on multi-node clusters (kind with workers, cloud providers, anything with more than one node). The scheduler can place the pod on any node, and only the node that has the host directory can serve the mount. You can pin the pod to a specific node with nodeSelector, but then you need extraMounts in the kind config (to get the host directory into the kind node’s Docker container) and a different hostPath value per cluster type (/var/home/... on k0s vs /mnt/docs on kind). Switching clusters means editing the patch.

Bind mounts are all-or-nothing: they work perfectly on single-node local clusters and not at all on multi-node clusters without per-node configuration. The rebuild-and-restart cycle in this tutorial works everywhere.

Check that everything is healthy:

Terminal window
# Pods running
kubectl get pods -n docs
# IngressRoutes configured
kubectl get ingressroutes -A
# Flux status (should show "suspended: true" during dev)
flux get kustomizations

In k9s, press Shift+L to see IngressRoutes, Shift+D for Deployments, or Shift+K for Flux Kustomizations.