# Traefik Ingress Setup
Traefik is the cluster ingress controller. It receives external HTTP and HTTPS traffic and routes it to backend Services using IngressRoute resources. This guide covers deploying Traefik via Flux, configuring TLS per environment, and exposing services through IngressRoutes.
## Flux manifests

The base manifests live in `infrastructure/base/traefik/`. Flux uses a HelmRepository to locate the chart and a HelmRelease to install it.
```yaml
# helmrepository.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: traefik
  namespace: flux-system
spec:
  interval: 1h0m0s
  url: https://traefik.github.io/charts
```

```yaml
# helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  chart:
    spec:
      chart: traefik
      version: 39.0.6
      sourceRef:
        kind: HelmRepository
        name: traefik
  interval: 5m0s
  releaseName: traefik
  targetNamespace: traefik
  install:
    createNamespace: true
```

Flux checks the chart repository every hour for new versions. Every five minutes it reconciles the running release against the desired state in Git; if the release drifts, Flux re-applies it.
A kustomization.yaml in the same directory includes both files:
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helmrepository.yaml
  - helmrelease.yaml
```

Each cluster's Flux Kustomization points at its overlay, not at base:
```yaml
# clusters/local/infrastructure.yaml
spec:
  path: ./infrastructure/overlays/local
```

```yaml
# clusters/remote/infrastructure.yaml
spec:
  path: ./infrastructure/overlays/remote
```

## Local overlay
The local cluster keeps Traefik off the network. Setting the Service type to ClusterIP prevents MetalLB from assigning an external IP. Ports remain on the Service (Traefik requires at least one) but are only reachable inside the cluster. The dashboard is enabled with insecure access, reachable only via `kubectl port-forward`.
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    api:
      dashboard: true
      insecure: true
    service:
      type: ClusterIP
    tlsStore:
      default:
        defaultCertificate:
          secretName: mkcert-wildcard
```

The `tlsStore` sets the default certificate cluster-wide. Any IngressRoute with `tls: {}` uses this certificate automatically.
## mkcert TLS

Generate a locally-trusted wildcard certificate and load it into the cluster:
```sh
mise run tls:mkcert-setup
```

The task is idempotent. It installs the mkcert CA into the system trust store, generates a certificate covering `*.k8s.local`, `*.k8s.lan`, `*.lan`, and `localhost`, then creates or updates the `mkcert-wildcard` TLS secret in the `traefik` namespace.
## Accessing services locally

Use the mise task to port-forward HTTP, HTTPS, and the dashboard to localhost:
```sh
mise run traefik
```

Under the hood this runs:
```sh
kubectl port-forward -n traefik svc/traefik 80:80 443:443 &
kubectl port-forward -n traefik <traefik-pod> 8080:8080 &
```

Point `*.k8s.local` to 127.0.0.1 in /etc/hosts or via dnsmasq. The port-forward stops when you close the terminal; there is no persistent equivalent in this setup.
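Note that /etc/hosts has no wildcard support, so each hostname must be listed individually; dnsmasq can resolve the whole `*.k8s.local` zone with one rule. A sketch of both options, using the hostnames from this guide (the dnsmasq config path is an example):

```
# /etc/hosts: list every hostname explicitly
127.0.0.1 grafana.k8s.local traefik.k8s.local

# dnsmasq (e.g. /etc/dnsmasq.d/k8s-local.conf): resolve the wildcard
address=/k8s.local/127.0.0.1
```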
## Remote overlay

The remote cluster uses a standard LoadBalancer Service. The patch adds HTTP-to-HTTPS redirects, a Let's Encrypt ACME resolver, and a persistent volume to store the ACME state file.
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    ports:
      web:
        redirections:
          entryPoint:
            to: websecure
            scheme: https
            permanent: true
    persistence:
      enabled: true
      size: 128Mi
      accessMode: ReadWriteOnce
    certificatesResolvers:
      letsencrypt:
        acme:
          email: certs@gmail.com
          storage: /data/acme.json
          httpChallenge:
            entryPoint: web
    deployment:
      initContainers:
        - name: volume-permissions
          image: busybox:latest
          command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
          volumeMounts:
            - mountPath: /data
              name: data
    podSecurityContext:
      fsGroup: 65532
      fsGroupChangePolicy: "OnRootMismatch"
```

Let's Encrypt uses the HTTP-01 challenge: it issues a request to `http://<your-domain>/.well-known/acme-challenge/...` to verify domain ownership, so the `web` entrypoint (port 80) must be publicly reachable. The `volume-permissions` init container creates `acme.json` with mode 600 before Traefik starts; Traefik refuses to use the file if its permissions are too open.
An IngressRoute activates the ACME resolver by setting `certResolver: letsencrypt` in its `tls` block instead of `tls: {}`.
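A sketch of what such a route looks like; the `whoami` service and hostname are illustrative, not part of this repository:

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami          # hypothetical example service
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`whoami.example.com`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: letsencrypt  # request a certificate via the ACME resolver
```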
## Dashboard access

The dashboard is enabled only on the local cluster. The `traefik` entrypoint is not exposed on the Service (`expose.default: false`), so the dashboard is unreachable from the network even on local.
| Cluster | api.dashboard | api.insecure | Reachable via |
|---|---|---|---|
| local | true | true | kubectl port-forward |
| remote | false | false | not exposed |
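The `expose.default: false` behavior comes from the chart's port configuration; in values form it looks like this (this mirrors the chart's default, shown for reference only, so no overlay needs to set it):

```yaml
ports:
  traefik:
    expose:
      default: false  # internal entrypoint is not added to the Service
```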
Access on local:
```sh
kubectl port-forward -n traefik \
  $(kubectl get pods -n traefik -l app.kubernetes.io/name=traefik -o name) 8080:8080
# Open http://127.0.0.1:8080/dashboard/ (trailing slash required)
```

## IngressRoutes
IngressRoute is Traefik's CRD for routing. Each one maps a hostname to a backend Service. The Traefik Helm chart installs the CRDs automatically.
### Grafana (shared)

Grafana is exposed on both clusters, so its IngressRoute lives in base:
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`grafana.k8s.local`)
      kind: Rule
      services:
        - name: kube-prometheus-grafana
          port: 80
  tls: {}
```

`tls: {}` uses the default certificate from the `tlsStore`: the mkcert wildcard on local, Let's Encrypt on remote.
### Traefik dashboard (local only)

The dashboard IngressRoute lives in the local overlay:
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.k8s.local`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
  tls: {}
```

`api@internal` is a built-in Traefik service that routes to the dashboard API.
After Flux deploys Traefik, find the LoadBalancer IP (remote) or use 127.0.0.1 (local):
```sh
kubectl get svc -n traefik traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

Add entries to /etc/hosts:
```
<TRAEFIK_LB_IP> grafana.k8s.local traefik.k8s.local
```

## Verification
After committing and pushing, reconcile and verify:
```sh
flux reconcile source git flux-system
flux reconcile kustomization infrastructure
```

```sh
flux get helmreleases -A
kubectl get pods -n traefik
kubectl get ingressroute -A
```