
Installing Traefik

  1. Trial install Traefik with Helm (install, test dashboard, uninstall)
  2. Create infrastructure/base/traefik/ Flux manifests (helmrepository, helmrelease, kustomization)
  3. Update infrastructure/base/kustomization.yaml to include both monitoring and traefik
  4. Update clusters/local/infrastructure.yaml path to ./infrastructure/base
  5. Generate mkcert certs and create TLS secret (mise run tls:mkcert-setup)
  6. Add tlsStore default certificate config to the local HelmRelease values
  7. Create IngressRoutes for Grafana and the Traefik dashboard
  8. Commit, push, and verify Flux reconciles Traefik
  9. Add DNS entries and verify HTTPS works locally
  10. Document the remote/Let’s Encrypt overlay for later (ACME resolver, HTTP→HTTPS redirect, persistence)

Install using the Traefik Helm chart (no -n flag, so the trial release goes into the default namespace):

Terminal window
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik --version 39.0.6
# helm uninstall traefik

The chart defines entrypoints under the ports: key in its values. To see what the running release is configured with:

Terminal window
# All computed values for the release (includes chart defaults + your overrides)
helm get values traefik -a -o yaml | grep -A 10 "^ports:"
# Just your overrides (what you changed from defaults)
helm get values traefik -o yaml

To see the full default values for the chart (before any install):

Terminal window
helm show values traefik/traefik | grep -A 10 "^ports:"

Each key under ports: is a Traefik entrypoint. The important fields are:

  • port — container-side port (internal to the pod)
  • exposedPort — Service-side port (what external traffic hits)
  • expose.default — whether the port appears on the Service (true/false)
  • protocol — TCP or UDP
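
As a concrete sketch, the chart's default web entrypoint wires these fields together roughly like this (values taken from the chart defaults; confirm against helm show values for your chart version):

```yaml
ports:
  web:
    port: 8000          # container-side port inside the pod
    exposedPort: 80     # Service-side port external traffic hits
    expose:
      default: true     # present on the default Service
    protocol: TCP
```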

Check the Service:

Terminal window
kubectl get svc -l app.kubernetes.io/name=traefik
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
traefik   LoadBalancer   10.98.85.16   192.168.10.241   80:30521/TCP,443:30733/TCP   55m

The chart exposes three entrypoints by default:

Entrypoint   Container port   Service port   Exposed   Purpose
web          8000             80             yes       HTTP
websecure    8443             443            yes       HTTPS/TLS
traefik      8080             -              no        Dashboard/API

The dashboard port (8080) is intentionally not on the Service, and the dashboard API is disabled by default [1]. Enable it for the trial:

Terminal window
helm upgrade traefik traefik/traefik --set api.dashboard=true --set api.insecure=true

api.insecure=true allows unauthenticated access on the traefik entrypoint (8080). Fine for a local trial, not for production.

Port-forward to access it:

Terminal window
kubectl port-forward $(kubectl get pods -l app.kubernetes.io/name=traefik -o name) 8080:8080

Then open http://127.0.0.1:8080/dashboard/ (trailing slash required).

Uninstall the trial first (Flux will recreate it from Git):

Terminal window
helm uninstall traefik

Generate the Flux source and release manifests:

Terminal window
mkdir -p infrastructure/base/traefik/

# HelmRepository: tells Flux where to fetch chart tarballs from.
# --interval=1h means Flux checks the repo index for new chart versions every hour.
flux create source helm traefik \
  --url=https://traefik.github.io/charts \
  --interval=1h \
  --export > infrastructure/base/traefik/helmrepository.yaml

# HelmRelease: tells Flux which chart to install and how to configure it.
# --interval=5m means Flux checks every 5 minutes whether the running release
# matches the desired state in git (values, version). If it drifts, Flux re-applies.
flux create helmrelease traefik \
  --source=HelmRepository/traefik \
  --chart=traefik \
  --chart-version=39.0.6 \
  --release-name=traefik \
  --target-namespace=traefik \
  --interval=5m \
  --export > infrastructure/base/traefik/helmrelease.yaml
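
The exported helmrelease.yaml should come out roughly like this (a sketch of the --export output; field order and interval formatting may differ by Flux version):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  chart:
    spec:
      chart: traefik
      sourceRef:
        kind: HelmRepository
        name: traefik
      version: 39.0.6
  interval: 5m0s
  releaseName: traefik
  targetNamespace: traefik
```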

Then edit the helmrelease to add values (dashboard, ports, TLS) and create the kustomization:

infrastructure/base/traefik/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helmrepository.yaml
  - helmrelease.yaml
The dashboard is only exposed on the local cluster, via the local overlay (infrastructure/overlays/local/traefik-patch.yaml). The base and remote overlay do not set api.insecure, so the dashboard is off by default on remote.

Even on local, the dashboard is not on the Service (expose.default: false on the traefik entrypoint). It is only reachable via kubectl port-forward, which requires host access and a valid kubeconfig. An attacker would need shell access to the machine and your kubeconfig to reach it.

Terminal window
# Local only - requires kubeconfig and host access
kubectl port-forward -n traefik \
  $(kubectl get pods -n traefik -l app.kubernetes.io/name=traefik -o name) 8080:8080

Then open http://127.0.0.1:8080/dashboard/.

Cluster   api.dashboard     api.insecure      Reachable via
local     true              true              kubectl port-forward
remote    false (default)   false (default)   not exposed

A mise task generates a locally-trusted wildcard certificate and loads it into the cluster as a TLS secret. The certificate covers multiple domain patterns so you can use whichever feels natural:

Pattern       Example
*.k8s.local   grafana.k8s.local
*.k8s.lan     grafana.k8s.lan
*.lan         grafana.lan
localhost     https://localhost

Run the task:

Terminal window
mise run tls:mkcert-setup

This is idempotent — safe to run repeatedly. It:

  1. Installs the mkcert CA into the system trust store (once)
  2. Generates the certificate at ~/.local/share/mkcert-k8s/ (once)
  3. Creates or updates the mkcert-wildcard TLS secret in the traefik namespace

The local overlay references this secret via tlsStore.default.defaultCertificate.
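
The secret the task creates is a standard kubernetes.io/tls Secret; conceptually it looks like this (key material elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mkcert-wildcard
  namespace: traefik
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded PEM certificate>
  tls.key: <base64-encoded PEM private key>
```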

infrastructure/
  base/
    monitoring/
      ingressroute-grafana.yaml      # Grafana IngressRoute (shared, both clusters)
      ...
    traefik/
      helmrepository.yaml            # chart source (shared)
      helmrelease.yaml               # common: chart version, namespace, ports
      kustomization.yaml
  overlays/
    local/
      kustomization.yaml             # refs ../../base, applies patch + dashboard route
      traefik-patch.yaml             # api.insecure=true, mkcert tlsStore
      ingressroute-dashboard.yaml    # dashboard IngressRoute (local only)
    remote/
      kustomization.yaml             # refs ../../base, applies patch + smoke test
      traefik-patch.yaml             # ACME resolver, HTTP→HTTPS redirect, persistence
      tls-smoke-test.yaml            # nginx + middleware + IngressRoute (plain text)
      tls-smoke-test-secret.yaml     # htpasswd credentials (SOPS-encrypted)

infrastructure/overlays/local/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - ingressroute-dashboard.yaml
patches:
  - path: traefik-patch.yaml

infrastructure/overlays/local/traefik-patch.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    api:
      dashboard: true
      insecure: true
    tlsStore:
      default:
        defaultCertificate:
          secretName: mkcert-wildcard

infrastructure/overlays/remote/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: traefik-patch.yaml

infrastructure/overlays/remote/traefik-patch.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    ports:
      web:
        redirections:
          entryPoint:
            to: websecure
            scheme: https
            permanent: true
    persistence:
      enabled: true
      size: 128Mi
      accessMode: ReadWriteOnce
    certificatesResolvers:
      letsencrypt:
        acme:
          email: you@example.com
          storage: /data/acme.json
          httpChallenge:
            entryPoint: web
    deployment:
      initContainers:
        - name: volume-permissions
          image: busybox:latest
          command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
          volumeMounts:
            - mountPath: /data
              name: data
    podSecurityContext:
      fsGroup: 65532
      fsGroupChangePolicy: "OnRootMismatch"

Each cluster’s Flux Kustomization points at its overlay, not at base:

clusters/local/infrastructure.yaml
spec:
  path: ./infrastructure/overlays/local

clusters/remote/infrastructure.yaml
spec:
  path: ./infrastructure/overlays/remote

IngressRoutes are Traefik’s CRD for routing. Each one maps a hostname to a backend Service. tls: {} uses the default certificate from the tlsStore (the mkcert wildcard on local, Let’s Encrypt on remote).

Lives in infrastructure/base/monitoring/ingressroute-grafana.yaml because both clusters expose Grafana. The hostname can be patched per overlay if needed later.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`grafana.k8s.local`)
      kind: Rule
      services:
        - name: kube-prometheus-grafana
          port: 80
  tls: {}

Traefik dashboard (local only — in overlay)


Lives in infrastructure/overlays/local/ingressroute-dashboard.yaml because the dashboard is only exposed on the local cluster.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.k8s.local`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
  tls: {}

api@internal is a special Traefik service that routes to its own dashboard API.

After Flux deploys Traefik, find the LoadBalancer IP:

Terminal window
kubectl get svc -n traefik traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Add entries to /etc/hosts (or local DNS):

<TRAEFIK_LB_IP> grafana.k8s.local traefik.k8s.local

Then open:

  • https://grafana.k8s.local — Grafana (default login: admin / see grafana-admin secret)
  • https://traefik.k8s.local — Traefik dashboard

Localhost-only access on the local cluster


The practical approach: set Traefik’s Service to ClusterIP so no external IP is assigned, then use kubectl port-forward to tunnel to localhost. The local overlay does this:

# infrastructure/overlays/local/traefik-patch.yaml (excerpt)
service:
  type: ClusterIP

ClusterIP keeps Traefik’s ports on the Service (the chart requires at least one) but makes them reachable only inside the cluster: no MetalLB IP, no NodePort, nothing on the network.

A mise task wraps kubectl port-forward so you don’t have to remember the commands:

Terminal window
mise run traefik

This reconciles Flux, waits for the pod, and forwards HTTP (80), HTTPS (443), and the dashboard (8080) to localhost. Point *.k8s.local at 127.0.0.1 via /etc/hosts or dnsmasq and everything works.
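
For the dnsmasq route, a single wildcard rule covers every subdomain (the file path here is an assumption and varies by OS):

```
# /etc/dnsmasq.d/k8s-local.conf
address=/k8s.local/127.0.0.1
```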

Under the hood, the mise task runs:

Terminal window
kubectl port-forward -n traefik svc/traefik 80:80 443:443 &
kubectl port-forward -n traefik <traefik-pod> 8080:8080 &

Port-forward tunnels through the Kubernetes API server directly to the pod, independent of the cluster’s network stack. It works on every Kubernetes distribution, every CNI plugin, every load balancer. The only downside is that it’s imperative — it dies when you close the terminal.

The load balancer implementation is not declared in your Traefik manifests — it’s a separate component installed on the cluster. To find out which one:

Terminal window
# Check for MetalLB
kubectl get pods -n metallb-system
# Check for k3s ServiceLB (svclb pods in kube-system)
kubectl get pods -n kube-system | grep svclb
# Check for a cloud controller (GKE, EKS, AKS)
kubectl get pods -n kube-system | grep cloud-controller

This cluster uses MetalLB in L2 mode with an IP pool of 192.168.10.240-250:

Terminal window
kubectl get ipaddresspool -n metallb-system -o yaml
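
For reference, an L2 pool like this is typically declared with two MetalLB resources, roughly as follows (the resource names here are assumptions):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.240-192.168.10.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```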

Alternative: Caddy as a localhost reverse proxy


If you want declarative, centralized, per-route localhost control, one option is to run a lightweight Caddy instance on the host (outside Kubernetes) that listens on 127.0.0.1 and reverse-proxies to the MetalLB IP:

# /etc/caddy/Caddyfile (runs on host, not in cluster)
grafana.k8s.local {
    reverse_proxy 192.168.10.241:443 {
        transport http {
            tls_insecure_skip_verify
        }
    }
    bind 127.0.0.1
}

This gives you declarative, centralized, per-route localhost control, but it’s a second reverse proxy outside the cluster. Whether that complexity is worth it depends on the threat model.

Terminal window
git add infrastructure/ clusters/
git commit -m "Add Traefik via Flux with IngressRoutes for Grafana and dashboard"
git push
Terminal window
flux reconcile source git flux-system
flux reconcile kustomization infrastructure

Verify:

Terminal window
flux get helmreleases -A
kubectl get pods -n traefik
kubectl get ingressroute -A

A minimal nginx deployment behind basic auth to verify Let’s Encrypt is working on the remote cluster. Split into two files to keep secrets separate:

  • infrastructure/overlays/remote/tls-smoke-test.yaml — all non-sensitive resources
  • infrastructure/overlays/remote/tls-smoke-test-secret.yaml — htpasswd credentials (SOPS-encrypt before committing)

The Secret is in its own file so SOPS only encrypts the credentials, not the entire deployment. This follows the same pattern as the Grafana admin secret in infrastructure/base/monitoring/grafana-secret.yaml.

It creates:

Resource               File                         Purpose
Namespace smoke-test   tls-smoke-test.yaml          Isolates the test from real workloads
ConfigMap              tls-smoke-test.yaml          Inline HTML page (“TLS is working”)
Deployment             tls-smoke-test.yaml          caddy:alpine serving the HTML
Service                tls-smoke-test.yaml          Exposes nginx internally on port 80
Middleware             tls-smoke-test.yaml          Traefik basicAuth — checks credentials before forwarding
IngressRoute           tls-smoke-test.yaml          Routes smoke.example.com → nginx, with auth + certResolver: letsencrypt
Secret                 tls-smoke-test-secret.yaml   htpasswd credentials (SOPS-encrypted)
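
The Middleware/Secret pairing looks roughly like this (the resource names are assumptions; users is the key Traefik’s basicAuth middleware reads from the referenced Secret):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: smoke-auth
  namespace: smoke-test
spec:
  basicAuth:
    secret: tls-smoke-test-auth
---
apiVersion: v1
kind: Secret
metadata:
  name: tls-smoke-test-auth
  namespace: smoke-test
stringData:
  users: |
    admin:<htpasswd hash>
```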

Before deploying to the remote cluster:

  1. Replace smoke.example.com in the IngressRoute with your actual domain
  2. Regenerate the password hash: htpasswd -nb admin yourpassword
  3. Update the users field in the Secret
  4. Encrypt the secret (see Secrets Management): sops --encrypt --in-place infrastructure/overlays/remote/tls-smoke-test-secret.yaml
  5. Point DNS for that domain to the remote Traefik LoadBalancer IP
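
If htpasswd (from apache2-utils) is not installed, step 2 can also be done with openssl, which produces the same APR1-MD5 format htpasswd uses by default. This sketch uses placeholder credentials (admin / smoketest); substitute your own:

```shell
# Placeholder credentials -- substitute your own before use.
USER=admin
PASS=smoketest

# APR1-MD5 hash, compatible with what htpasswd -nb emits by default.
HASH=$(openssl passwd -apr1 "$PASS")
echo "${USER}:${HASH}"
```

The output line goes into the users field of the Secret.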

Verify after deployment:

Terminal window
kubectl get pods -n smoke-test
curl -u admin:smoketest https://smoke.example.com

If you see “TLS is working” and the certificate is valid, Let’s Encrypt is configured correctly. Once verified, remove the smoke test by deleting tls-smoke-test.yaml from the remote overlay’s kustomization.yaml and pushing — Flux will clean it up (prune: true).

[1] https://doc.traefik.io/traefik/reference/install-configuration/api-dashboard/