
Recovering ~/.kube

Kubeconfig files are the only credentials your tools use to reach the cluster. Lose them and kubectl, helm, flux, and k9s all stop working. The cluster itself is unaffected — the API server and certificates are still there. You just need new credential files.

Three situations lead here, and each requires different steps:

Scenario                          | Cluster state                  | What you need
A — Wiped ~/.kube                 | k0s running on this machine    | Re-export the kubeconfig
B — New machine, existing cluster | k0s running on another machine | Copy or generate a kubeconfig, set up local tooling
C — Fresh start                   | No cluster anywhere            | Install k0s, bootstrap everything

Scenario A: Wiped ~/.kube on the same machine


The cluster is running locally. You deleted ~/.kube/ or the config files inside it. k0s still has its CA and certs — you just need a fresh export.

  1. Re-export the kubeconfig

    Terminal window
    mkdir -p ~/.kube
    sudo k0s kubeconfig admin > ~/.kube/k0s-admin.conf
    chmod 600 ~/.kube/k0s-admin.conf

    The project’s mise.toml already sets KUBECONFIG=/var/home/ryan/.kube/k0s-admin.conf, so every tool picks this up when you cd into the project. The kb binary also reads this value from mise.toml directly, so it works even outside a mise-activated shell.

  2. Verify access

    Terminal window
    kubectl get nodes

    If the node shows Ready, you are done. Flux is already running in the cluster, the SOPS age key is already there, TLS certs and /etc/hosts entries are on disk from before. Nothing else to do.

That’s it. Skip to Generating user kubeconfigs if you want RBAC-scoped credentials instead of admin.

Scenario B: New machine, existing cluster elsewhere


You rsync’d or cloned the project to a new workstation. k0s is running on the original machine (reachable over the network). The cluster already has Flux, the SOPS age key, and all workloads — you do not need to bootstrap anything on the cluster side.

What you need on the new machine: a kubeconfig pointing at the remote API server, local tooling, TLS for the browser, and DNS entries.

  1. Get a kubeconfig

    You have two options.

    Option A — Copy the admin config from the cluster machine:

    Terminal window
    mkdir -p ~/.kube
    scp cluster-host:~/.kube/k0s-admin.conf ~/.kube/k0s-admin.conf
    chmod 600 ~/.kube/k0s-admin.conf

    The copied config has server: https://localhost:6443 — edit it to point at the cluster machine’s IP or hostname:

    Terminal window
    sed -i 's|https://localhost:6443|https://cluster-host:6443|' ~/.kube/k0s-admin.conf

    Option B — Generate a user kubeconfig on the cluster machine:

    SSH into the cluster machine and run:

    Terminal window
    sudo k0s kubeconfig create --groups team-admins ryan

    This prints a kubeconfig to stdout. Save it locally as ~/.kube/k0s-ryan.conf and update mise.toml to point at it. The server address in the generated config should already reflect the cluster machine’s hostname.

  2. Verify access

    Terminal window
    kubectl get nodes

    If this fails, check that port 6443 is open on the cluster machine’s firewall and reachable from your network.

  3. Install tools

    The project’s mise.toml declares all required tools (kubectl, flux, helm, k9s, mkcert, etc.). Trust the config and install:

    Terminal window
    mise trust
    mise install
  4. Build the CLI

    Terminal window
    cd .mise/cli && bun run build.ts && cd ../..
  5. Set up local TLS

    mkcert creates a local CA on your machine so browsers trust *.k8s.local certificates. This is per-machine — the cluster’s TLS secret already exists, but your browser needs the local CA root:

    Terminal window
    kb tls mkcert-setup

    See Local TLS with mkcert for details.

  6. Add DNS entries

    Terminal window
    grep -q k8s.local /etc/hosts || sudo tee -a /etc/hosts <<< '127.0.0.1 docs.k8s.local grafana.k8s.local traefik.k8s.local'
  7. Start the dev environment

    Terminal window
    kb dev start

    This builds container images locally, applies the dev overlay, and forwards ports. Because Flux is already reconciling on the cluster, kb dev start suspends it for the session and resumes it on exit.

    For live editing with Astro HMR:

    Terminal window
    kb dev start --dev

What you do NOT need to do: bootstrap Flux (already running), inject the SOPS age key (already in the cluster), or open firewall ports on the cluster (already configured). All cluster-side state is intact.
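If the kubectl check in step 2 fails, it helps to separate a network problem from a credential problem before touching any kubeconfig. A minimal probe, where `cluster-host` is the placeholder hostname used above:

```shell
# Probe the API server port directly. Kubernetes API servers answer /version
# over TLS even to anonymous clients (often with a 401/403 JSON body), so any
# HTTP response at all means the port is open and TLS is up.
HOST=cluster-host
if curl -ks --connect-timeout 5 "https://${HOST}:6443/version" > /dev/null; then
  echo "port 6443 reachable"
else
  echo "port 6443 unreachable from this machine"
fi
```

If it prints "unreachable", fix firewall or DNS first; if it prints "reachable", the problem is inside the kubeconfig itself (wrong server address, stale certs).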

Scenario C: Fresh start — no cluster anywhere


A blank machine with nothing set up. No k0s, no cluster, no kubeconfig. This is the full sequence.

  1. Install k0s

    Terminal window
    curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh

    See Installing k0s for manual install, SELinux configuration, and multi-node setups.

  2. Open firewall ports

    Terminal window
    sudo firewall-cmd --permanent --add-port=6443/tcp
    sudo firewall-cmd --permanent --add-port=10250/tcp
    sudo firewall-cmd --permanent --zone=trusted --add-interface=kube-bridge
    sudo firewall-cmd --permanent --add-masquerade
    sudo firewall-cmd --reload
  3. Create the cluster

    Terminal window
    sudo k0s install controller --enable-worker --no-taints
    sudo k0s start

    --enable-worker runs a kubelet on the same node. --no-taints lets workloads schedule there.

  4. Export the kubeconfig

    Terminal window
    mkdir -p ~/.kube
    sudo k0s kubeconfig admin > ~/.kube/k0s-admin.conf
    chmod 600 ~/.kube/k0s-admin.conf
  5. Verify the cluster

    Terminal window
    kubectl get nodes

    Wait for the node to reach Ready (usually under two minutes).

  6. Install tools

    Terminal window
    mise trust
    mise install
  7. Build the CLI

    Terminal window
    cd .mise/cli && bun run build.ts && cd ../..
  8. Set up local TLS

    Terminal window
    kb tls mkcert-setup
  9. Add DNS entries

    Terminal window
    grep -q k8s.local /etc/hosts || sudo tee -a /etc/hosts <<< '127.0.0.1 docs.k8s.local grafana.k8s.local traefik.k8s.local'
  10. Inject the SOPS age key

    Flux needs the age private key to decrypt secrets. Retrieve it from your password manager (or KeePass) and inject it:

    Terminal window
    echo "AGE-SECRET-KEY-..." | kubectl create secret generic sops-age \
      --namespace=flux-system \
      --from-file=age.agekey=/dev/stdin

    If you are creating a brand-new cluster (not restoring an existing one), generate a fresh keypair instead:

    Terminal window
    age-keygen -o /tmp/age.key
    kubectl create secret generic sops-age \
      --namespace=flux-system \
      --from-file=age.agekey=/tmp/age.key

    Add the public key to .sops.yaml under a new creation_rules entry, then re-encrypt any secrets. See Managing Secrets with SOPS for the full workflow. Store the private key in your password manager and delete /tmp/age.key.

  11. Bootstrap Flux

    Terminal window
    mise run flux-bootstrap

    This installs the Flux controllers, commits component manifests to clusters/vale/flux-system/, and starts reconciling the cluster toward the declared Git state. It needs a GitHub token — the command reads it from the encrypted .env.json. See Bootstrapping Flux.

  12. Start the dev environment

    Terminal window
    kb dev start

    Or with live editing:

    Terminal window
    kb dev start --dev

    Once running, open https://docs.k8s.local.
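For step 10's fresh-keypair path, the .sops.yaml addition mentioned there might look like the following. Both the path_regex and the key are illustrative: match the regex to the repository's actual secret file layout, and substitute the "public key:" line printed by age-keygen.

```yaml
# Illustrative creation_rules entry — field values here are placeholders.
creation_rules:
  - path_regex: .*secret.*\.ya?ml$
    age: age1examplepublickeyexamplepublickeyexamplepublickeyexample
```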

Which steps apply to which scenario:

Step                     | A (wiped config) | B (new machine) | C (fresh start)
Install k0s              | -                | -               | yes
Firewall ports           | -                | -               | yes
Create cluster           | -                | -               | yes
Export/copy kubeconfig   | yes              | yes             | yes
Install tools (mise)     | -                | yes             | yes
Build CLI                | -                | yes             | yes
Local TLS (mkcert)       | -                | yes             | yes
DNS entries (/etc/hosts) | -                | yes             | yes
Inject SOPS age key      | -                | -               | yes
Bootstrap Flux           | -                | -               | yes
Start dev environment    | optional         | yes             | yes

Generating user kubeconfigs

The admin config gives full cluster access. For RBAC-scoped permissions, generate user certificates on the machine running k0s:

Terminal window
sudo uv run scripts/create-users/generate-kubeconfigs.py generate
cp /tmp/kubeconfigs/ryan.kubeconfig ~/.kube/k0s-ryan.conf
chmod 600 ~/.kube/k0s-ryan.conf

users.yaml defines users, group memberships, and certificate expiry durations. To switch from admin to user credentials, update mise.toml:

[env]
KUBECONFIG = "~/.kube/k0s-ryan.conf"
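The users.yaml schema itself is not reproduced in this guide. Based on the description above (users, group memberships, certificate expiry durations), an entry plausibly looks like this, but the field names are illustrative, so check the file alongside scripts/create-users/ for the real schema:

```yaml
# Illustrative shape only — consult the project's users.yaml for the
# authoritative field names.
users:
  - name: ryan
    groups:
      - team-admins
    expiry: 8760h   # one year
```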

If you also use managed clusters, regenerate configs from the provider:

Terminal window
# DigitalOcean
kb infra do kubeconfig --path ~/.kube/do-cluster.conf
# GKE
kb infra gke kubeconfig --path ~/.kube/gke-cluster.conf

Inspect all kubeconfig files — the script parses certificates and shows username, groups, server, and expiry:

Terminal window
uv run scripts/create-users/generate-kubeconfigs.py inspect ~/.kube/*.conf
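What that inspection does under the hood: a kubeconfig's client-certificate-data is a base64-encoded X.509 certificate whose CN is the Kubernetes username and whose O entries are the groups. A sketch of the idea using openssl alone, with a throwaway self-signed certificate standing in for the embedded one:

```shell
# Generate a stand-in client cert with the same subject layout Kubernetes
# uses (O = group, CN = username), then read identity and expiry back out.
# With a real kubeconfig you would first extract and base64-decode
# users[].user.client-certificate-data instead of generating a cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 30 -subj '/O=team-admins/CN=ryan' 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -subject -enddate
```

The subject line carries the identity (group and username) and notAfter is the expiry the inspect command reports.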

Check cluster health:

Terminal window
kubectl get nodes
kubectl get pods -A
flux get all