Installing Traefik
- Trial install Traefik with Helm (install, test dashboard, uninstall)
- Create infrastructure/base/traefik/ Flux manifests (helmrepository, helmrelease, kustomization)
- Update infrastructure/base/kustomization.yaml to include both monitoring and traefik
- Update clusters/local/infrastructure.yaml path to ./infrastructure/base
- Generate mkcert certs and create TLS secret (`mise run tls:mkcert-setup`)
- Add tlsStore default certificate config to the local HelmRelease values
- Create IngressRoutes for Grafana and the Traefik dashboard
- Commit, push, and verify Flux reconciles Traefik
- Add DNS entries and verify HTTPS works locally
- Document the remote/Let’s Encrypt overlay for later (ACME resolver, HTTP→HTTPS redirect, persistence)
Walkthrough
Trial install with Traefik
Helm Install
Install with the Traefik Helm chart. Note there is no -n flag: the trial goes into the default namespace.
```bash
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik --version 39.0.6
# helm uninstall traefik
```

Finding available ports and entrypoints
The chart defines entrypoints under the `ports:` key in its values. To see what the running release is configured with:
```bash
# All computed values for the release (includes chart defaults + your overrides)
helm get values traefik -a -o yaml | grep -A 10 "^ports:"

# Just your overrides (what you changed from defaults)
helm get values traefik -o yaml
```

To see the full default values for the chart (before any install):

```bash
helm show values traefik/traefik | grep -A 10 "^ports:"
```

Each key under `ports:` is a Traefik entrypoint. The important fields are:

- `port` — container-side port (internal to the pod)
- `exposedPort` — Service-side port (what external traffic hits)
- `expose.default` — whether the port appears on the Service (true/false)
- `protocol` — TCP or UDP
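For orientation, a single entrypoint entry in the chart values looks roughly like this (the numbers match the chart's default web entrypoint; treat the layout as a sketch and confirm against `helm show values` for your chart version):

```yaml
ports:
  web:
    port: 8000        # container-side port (internal to the pod)
    exposedPort: 80   # Service-side port (what external traffic hits)
    expose:
      default: true   # appears on the Service
    protocol: TCP
```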
Investigate the Dashboard
Check the Service:

```bash
kubectl get svc -l app.kubernetes.io/name=traefik
```

```
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
traefik   LoadBalancer   10.98.85.16   192.168.10.241   80:30521/TCP,443:30733/TCP   55m
```

The chart exposes three entrypoints by default:
| Entrypoint | Container port | Service port | Exposed | Purpose |
|---|---|---|---|---|
| web | 8000 | 80 | yes | HTTP |
| websecure | 8443 | 443 | yes | HTTPS/TLS |
| traefik | 8080 | — | no | Dashboard/API |
The dashboard port (8080) is intentionally not on the Service, and the dashboard API is disabled by default. Enable it for the trial:
```bash
helm upgrade traefik traefik/traefik --set api.dashboard=true --set api.insecure=true
```

`api.insecure=true` allows unauthenticated access on the traefik entrypoint (8080). Fine for a local trial, not for production.
Port-forward to access it:
```bash
kubectl port-forward $(kubectl get pods -l app.kubernetes.io/name=traefik -o name) 8080:8080
```

Then open http://127.0.0.1:8080/dashboard/ (trailing slash required).
Create Flux manifests
Uninstall the trial first (Flux will recreate it from Git):
```bash
helm uninstall traefik
```

Generate the Flux source and release manifests:

```bash
mkdir -p infrastructure/base/traefik/

# HelmRepository: tells Flux where to fetch chart tarballs from.
# --interval=1h means Flux checks the repo index for new chart versions every hour.
flux create source helm traefik \
  --url=https://traefik.github.io/charts \
  --interval=1h \
  --export > infrastructure/base/traefik/helmrepository.yaml

# HelmRelease: tells Flux which chart to install and how to configure it.
# --interval=5m means Flux checks every 5 minutes whether the running release
# matches the desired state in git (values, version). If it drifts, Flux re-applies.
flux create helmrelease traefik \
  --source=HelmRepository/traefik \
  --chart=traefik \
  --chart-version=39.0.6 \
  --release-name=traefik \
  --target-namespace=traefik \
  --interval=5m \
  --export > infrastructure/base/traefik/helmrelease.yaml
```

Then edit the HelmRelease to add values (dashboard, ports, TLS) and create the kustomization:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helmrepository.yaml
  - helmrelease.yaml
```

Dashboard security
The dashboard is only exposed on the local cluster, via the local overlay
(infrastructure/overlays/local/traefik-patch.yaml). The base and remote overlay
do not set api.insecure, so the dashboard is off by default on remote.
Even on local, the dashboard is not on the Service (expose.default: false on the
traefik entrypoint). It is only reachable via kubectl port-forward, which requires
host access and a valid kubeconfig. An attacker would need shell access to the
machine and your kubeconfig to reach it.
```bash
# Local only — requires kubeconfig and host access
kubectl port-forward -n traefik \
  $(kubectl get pods -n traefik -l app.kubernetes.io/name=traefik -o name) 8080:8080
```

| Cluster | api.dashboard | api.insecure | Reachable via |
|---|---|---|---|
| local | true | true | kubectl port-forward |
| remote | false (default) | false (default) | not exposed |
mkcert setup (local TLS)
A mise task generates a locally-trusted wildcard certificate and loads it into the cluster as a TLS secret. The certificate covers multiple domain patterns so you can use whichever feels natural:
| Pattern | Example |
|---|---|
| *.k8s.local | grafana.k8s.local |
| *.k8s.lan | grafana.k8s.lan |
| *.lan | grafana.lan |
| localhost | https://localhost |
Run the task:
```bash
mise run tls:mkcert-setup
```

This is idempotent — safe to run repeatedly. It:
- Installs the mkcert CA into the system trust store (once)
- Generates the certificate at `~/.local/share/mkcert-k8s/` (once)
- Creates or updates the `mkcert-wildcard` TLS secret in the `traefik` namespace
The local overlay references this secret via tlsStore.default.defaultCertificate.
Overlays
```
infrastructure/
  base/
    monitoring/
      ingressroute-grafana.yaml      # Grafana IngressRoute (shared, both clusters)
      ...
    traefik/
      helmrepository.yaml            # chart source (shared)
      helmrelease.yaml               # common: chart version, namespace, ports
      kustomization.yaml
  overlays/
    local/
      kustomization.yaml             # refs ../../base, applies patch + dashboard route
      traefik-patch.yaml             # api.insecure=true, mkcert tlsStore
      ingressroute-dashboard.yaml    # dashboard IngressRoute (local only)
    remote/
      kustomization.yaml             # refs ../../base, applies patch + smoke test
      traefik-patch.yaml             # ACME resolver, HTTP→HTTPS redirect, persistence
      tls-smoke-test.yaml            # nginx + middleware + IngressRoute (plain text)
      tls-smoke-test-secret.yaml     # htpasswd credentials (SOPS-encrypted)
```

Local overlay
```yaml
# infrastructure/overlays/local/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - ingressroute-dashboard.yaml
patches:
  - path: traefik-patch.yaml
```

```yaml
# infrastructure/overlays/local/traefik-patch.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    api:
      dashboard: true
      insecure: true
    tlsStore:
      default:
        defaultCertificate:
          secretName: mkcert-wildcard
```

Remote overlay
```yaml
# infrastructure/overlays/remote/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: traefik-patch.yaml
```

```yaml
# infrastructure/overlays/remote/traefik-patch.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    ports:
      web:
        redirections:
          entryPoint:
            to: websecure
            scheme: https
            permanent: true
    persistence:
      enabled: true
      size: 128Mi
      accessMode: ReadWriteOnce
    certificatesResolvers:
      letsencrypt:
        acme:
          email: you@example.com
          storage: /data/acme.json
          httpChallenge:
            entryPoint: web
    deployment:
      initContainers:
        - name: volume-permissions
          image: busybox:latest
          command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
          volumeMounts:
            - mountPath: /data
              name: data
    podSecurityContext:
      fsGroup: 65532
      fsGroupChangePolicy: "OnRootMismatch"
```

Wire the overlays
Each cluster’s Flux Kustomization points at its overlay, not at base:
```yaml
# clusters/local/infrastructure.yaml
spec:
  path: ./infrastructure/overlays/local
```
```yaml
# clusters/remote/infrastructure.yaml
spec:
  path: ./infrastructure/overlays/remote
```

IngressRoutes
IngressRoutes are Traefik’s CRD for routing. Each one maps a hostname to a backend
Service. tls: {} uses the default certificate from the tlsStore (the mkcert
wildcard on local, Let’s Encrypt on remote).
Grafana (shared — in base)
Lives in infrastructure/base/monitoring/ingressroute-grafana.yaml because both
clusters expose Grafana. The hostname can be patched per overlay if needed later.
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`grafana.k8s.local`)
      kind: Rule
      services:
        - name: kube-prometheus-grafana
          port: 80
  tls: {}
```

Traefik dashboard (local only — in overlay)
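If one cluster later needs a different Grafana hostname, the overlay can patch just this resource. A sketch (the domain and filename are placeholders; the whole route entry is restated because kustomize replaces list fields wholesale when patching custom resources):

```yaml
# hypothetical infrastructure/overlays/remote/grafana-hostname-patch.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  routes:
    - match: Host(`grafana.example.com`)  # placeholder domain
      kind: Rule
      services:
        - name: kube-prometheus-grafana
          port: 80
```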
Lives in infrastructure/overlays/local/ingressroute-dashboard.yaml because the
dashboard is only exposed on the local cluster.
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.k8s.local`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
  tls: {}
```

`api@internal` is a special Traefik service that routes to its own dashboard API.
After Flux deploys Traefik, find the LoadBalancer IP:
```bash
kubectl get svc -n traefik traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

Add entries to /etc/hosts (or local DNS):
```
<TRAEFIK_LB_IP> grafana.k8s.local traefik.k8s.local
```

Then open:
- https://grafana.k8s.local — Grafana (default login: admin / see `grafana-admin` secret)
- https://traefik.k8s.local — Traefik dashboard
Localhost-only access on the local cluster
The practical approach: set Traefik’s Service to ClusterIP so no external IP is
assigned, then use kubectl port-forward to tunnel to localhost. The local overlay
does this:
```yaml
# infrastructure/overlays/local/traefik-patch.yaml (excerpt)
service:
  type: ClusterIP
```

ClusterIP keeps Traefik’s ports on the Service (the chart requires at least one)
but only reachable inside the cluster — no MetalLB IP, no NodePort, nothing on the
network.
A mise task wraps kubectl port-forward so you don’t have to remember the commands:

```bash
mise run traefik
```

This reconciles Flux, waits for the pod, and forwards HTTP (80), HTTPS (443), and
the dashboard (8080) to localhost. Point *.k8s.local at 127.0.0.1 via
/etc/hosts or dnsmasq and everything works.
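For the dnsmasq route, one wildcard directive maps every name under the domain to localhost (the drop-in path is a common default; it varies by distro):

```
# /etc/dnsmasq.d/k8s-local.conf
address=/k8s.local/127.0.0.1
```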
Under the hood it runs:
```bash
kubectl port-forward -n traefik svc/traefik 80:80 443:443 &
kubectl port-forward -n traefik <traefik-pod> 8080:8080 &
```

Port-forward tunnels through the Kubernetes API server directly to the pod, independent of the cluster’s network stack. It works on every Kubernetes distribution, every CNI plugin, every load balancer. The only downside is that it’s imperative — it dies when you close the terminal.
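If the forwards should outlive a terminal session, one option (a sketch, not part of this repo’s tasks) is a user-level systemd unit that restarts the tunnel on failure; binding ports below 1024 needs elevated privileges, so this example uses high ports:

```ini
# ~/.config/systemd/user/traefik-forward.service (hypothetical)
[Unit]
Description=kubectl port-forward to in-cluster Traefik
After=network-online.target

[Service]
ExecStart=/usr/bin/kubectl port-forward -n traefik svc/traefik 8080:80 8443:443
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now traefik-forward`.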
Which load balancer am I using?
The load balancer implementation is not declared in your Traefik manifests — it’s a separate component installed on the cluster. To find out which one:
```bash
# Check for MetalLB
kubectl get pods -n metallb-system

# Check for k3s ServiceLB (svclb pods in kube-system)
kubectl get pods -n kube-system | grep svclb

# Check for a cloud controller (GKE, EKS, AKS)
kubectl get pods -n kube-system | grep cloud-controller
```

This cluster uses MetalLB in L2 mode with an IP pool of 192.168.10.240-250:
```bash
kubectl get ipaddresspool -n metallb-system -o yaml
```
If you want declarative, centralized, per-route localhost control, one option is
to run a lightweight Caddy instance on the host (outside Kubernetes) that listens
on 127.0.0.1 and reverse-proxies to the MetalLB IP:
```
# /etc/caddy/Caddyfile (runs on host, not in cluster)
grafana.k8s.local {
  reverse_proxy 192.168.10.241:443 {
    transport http {
      tls_insecure_skip_verify
    }
  }
  bind 127.0.0.1
}
```

This gives you that per-route control, but it adds a second reverse proxy outside the cluster. Whether the extra moving part is worth it depends on the threat model.
Commit and push
```bash
git add infrastructure/ clusters/
git commit -m "Add Traefik via Flux with IngressRoutes for Grafana and dashboard"
git push
```

Reconcile
```bash
flux reconcile source git flux-system
flux reconcile kustomization infrastructure
```

Verify:
```bash
flux get helmreleases -A
kubectl get pods -n traefik
kubectl get ingressroute -A
```

TLS smoke test (remote only)
A minimal nginx deployment behind basic auth to verify Let’s Encrypt is working on the remote cluster. Split into two files to keep secrets separate:
- infrastructure/overlays/remote/tls-smoke-test.yaml — all non-sensitive resources
- infrastructure/overlays/remote/tls-smoke-test-secret.yaml — htpasswd credentials (SOPS-encrypt before committing)
The Secret is in its own file so SOPS only encrypts the credentials, not the entire
deployment. This follows the same pattern as the Grafana admin secret in
infrastructure/base/monitoring/grafana-secret.yaml.
It creates:
| Resource | File | Purpose |
|---|---|---|
| Namespace smoke-test | tls-smoke-test.yaml | Isolates the test from real workloads |
| ConfigMap | tls-smoke-test.yaml | Inline HTML page (“TLS is working”) |
| Deployment | tls-smoke-test.yaml | nginx:alpine serving the HTML |
| Service | tls-smoke-test.yaml | Exposes nginx internally on port 80 |
| Middleware | tls-smoke-test.yaml | Traefik basicAuth — checks credentials before forwarding |
| IngressRoute | tls-smoke-test.yaml | Routes smoke.example.com → nginx, with auth + certResolver: letsencrypt |
| Secret | tls-smoke-test-secret.yaml | htpasswd credentials (SOPS-encrypted) |
Before deploying to the remote cluster:
- Replace smoke.example.com in the IngressRoute with your actual domain
- Regenerate the password hash: `htpasswd -nb admin yourpassword`
- Update the `users` field in the Secret
- Encrypt the secret (see Secrets Management): `sops --encrypt --in-place infrastructure/overlays/remote/tls-smoke-test-secret.yaml`
- Point DNS for that domain to the remote Traefik LoadBalancer IP
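Before encryption, the secret might look like this (the name and hash below are illustrative; use whatever name the Middleware actually references, and paste your own htpasswd output):

```yaml
# infrastructure/overlays/remote/tls-smoke-test-secret.yaml (illustrative, pre-encryption)
apiVersion: v1
kind: Secret
metadata:
  name: tls-smoke-test-auth   # hypothetical name
  namespace: smoke-test
stringData:
  users: |
    admin:$apr1$REPLACE$WithHtpasswdOutput   # replace with htpasswd -nb output
```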
Verify after deployment:
```bash
kubectl get pods -n smoke-test
curl -u admin:smoketest https://smoke.example.com
```

If you see “TLS is working” and the certificate is valid, Let’s Encrypt is configured
correctly. Once verified, remove the smoke test by deleting tls-smoke-test.yaml
from the remote overlay’s kustomization.yaml and pushing — Flux will clean it up
(prune: true).
Resources
- https://github.com/traefik/traefik-helm-chart/blob/master/EXAMPLES.md#use-traefik-native-lets-encrypt-integration-without-cert-manager
- https://artifacthub.io/packages/helm/traefik/traefik
- https://doc.traefik.io/traefik/reference/install-configuration/api-dashboard/