
Traefik Ingress Setup

Traefik is the cluster ingress controller. It receives external HTTP and HTTPS traffic and routes it to backend Services using IngressRoute resources. This guide covers deploying Traefik via Flux, configuring TLS per environment, and exposing services through IngressRoutes.

The base manifests live in infrastructure/base/traefik/. Flux uses a HelmRepository to locate the chart and a HelmRelease to install it.

infrastructure/base/traefik/helmrepository.yaml

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: traefik
  namespace: flux-system
spec:
  interval: 1h0m0s
  url: https://traefik.github.io/charts
```
infrastructure/base/traefik/helmrelease.yaml

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  chart:
    spec:
      chart: traefik
      version: 39.0.6
      sourceRef:
        kind: HelmRepository
        name: traefik
  interval: 5m0s
  releaseName: traefik
  targetNamespace: traefik
  install:
    createNamespace: true
```

Flux checks the chart repository every hour for new versions. Every five minutes it reconciles the running release against the desired state in Git — if the release drifts, Flux re-applies.
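When testing a change, you don't have to wait for those intervals. The reconcile loops can be triggered by hand with the flux CLI (names here match the manifests above):

```shell
# Pull the latest chart index now instead of waiting for the 1h interval
flux reconcile source helm traefik -n flux-system

# Re-apply the release against the desired state in Git right away
flux reconcile helmrelease traefik -n flux-system
```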

A kustomization.yaml in the same directory includes both files:

infrastructure/base/traefik/kustomization.yaml

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helmrepository.yaml
  - helmrelease.yaml
```

Each cluster’s Flux Kustomization points at its overlay, not at base:

clusters/local/infrastructure.yaml

```yaml
spec:
  path: ./infrastructure/overlays/local
```

clusters/remote/infrastructure.yaml

```yaml
spec:
  path: ./infrastructure/overlays/remote
```
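The overlay kustomizations themselves are not shown in this guide, but a minimal local overlay would pull in base and apply the values patch. This is an illustrative sketch; the actual file in the repository may list additional resources or differ in naming:

```yaml
# infrastructure/overlays/local/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/traefik
  - ingressroute-dashboard.yaml   # local-only dashboard route
patches:
  - path: traefik-patch.yaml      # local HelmRelease values patch
```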

The local cluster keeps Traefik off the network. Setting the Service type to ClusterIP prevents MetalLB from assigning an external IP. Ports remain on the Service (Traefik requires at least one) but are only reachable inside the cluster. The dashboard is enabled with insecure access — reachable only via kubectl port-forward.

infrastructure/overlays/local/traefik-patch.yaml

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    api:
      dashboard: true
      insecure: true
    service:
      type: ClusterIP
    tlsStore:
      default:
        defaultCertificate:
          secretName: mkcert-wildcard
```

The tlsStore sets the default certificate cluster-wide. Any IngressRoute with tls: {} uses this certificate automatically.

Generate a locally-trusted wildcard certificate and load it into the cluster:

```shell
mise run tls:mkcert-setup
```

This is idempotent. It installs the mkcert CA into the system trust store, generates a certificate covering *.k8s.local, *.k8s.lan, *.lan, and localhost, then creates or updates the mkcert-wildcard TLS secret in the traefik namespace.
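The task roughly corresponds to the following commands — a sketch, not the task's actual contents; the real flags and file paths may differ:

```shell
mkcert -install                                   # install the mkcert CA into the system trust store
mkcert -cert-file tls.crt -key-file tls.key \
  "*.k8s.local" "*.k8s.lan" "*.lan" localhost     # wildcard certificate for the local domains
kubectl create secret tls mkcert-wildcard -n traefik \
  --cert=tls.crt --key=tls.key \
  --dry-run=client -o yaml | kubectl apply -f -   # create-or-update keeps the task idempotent
```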

Use the mise task to port-forward HTTP, HTTPS, and the dashboard to localhost:

```shell
mise run traefik
```

Under the hood this runs:

```shell
kubectl port-forward -n traefik svc/traefik 80:80 443:443 &
kubectl port-forward -n traefik <traefik-pod> 8080:8080 &
```

Point *.k8s.local to 127.0.0.1 in /etc/hosts or via dnsmasq. The port-forward dies when you close the terminal — there is no persistent equivalent in this setup.
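With dnsmasq, a single wildcard rule avoids maintaining individual /etc/hosts entries (assuming dnsmasq is already your local resolver):

```
# /etc/dnsmasq.conf, or a drop-in under dnsmasq.d/
address=/k8s.local/127.0.0.1
```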

The remote cluster uses a standard LoadBalancer service. The patch adds HTTP-to-HTTPS redirects, a Let’s Encrypt ACME resolver, and a persistent volume to store the ACME state file.

infrastructure/overlays/remote/traefik-patch.yaml

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: flux-system
spec:
  values:
    ports:
      web:
        redirections:
          entryPoint:
            to: websecure
            scheme: https
            permanent: true
    persistence:
      enabled: true
      size: 128Mi
      accessMode: ReadWriteOnce
    certificatesResolvers:
      letsencrypt:
        acme:
          email: certs@gmail.com
          storage: /data/acme.json
          httpChallenge:
            entryPoint: web
    deployment:
      initContainers:
        - name: volume-permissions
          image: busybox:latest
          command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
          volumeMounts:
            - mountPath: /data
              name: data
    podSecurityContext:
      fsGroup: 65532
      fsGroupChangePolicy: "OnRootMismatch"
```

Let’s Encrypt uses HTTP challenge: it issues a request to http://<your-domain>/.well-known/acme-challenge/... to verify domain ownership. The web entrypoint (port 80) must be publicly reachable. The volume-permissions init container creates acme.json with mode 600 before Traefik starts — Let’s Encrypt rejects the file if permissions are too open.

An IngressRoute activates the ACME resolver by setting certResolver: letsencrypt in its tls block instead of tls: {}.
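Sketched against the Grafana route from this guide, a remote-cluster variant would look like this (the hostname is a placeholder — use a domain that publicly resolves to the cluster's LoadBalancer IP):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`grafana.example.com`)  # placeholder domain
      kind: Rule
      services:
        - name: kube-prometheus-grafana
          port: 80
  tls:
    certResolver: letsencrypt  # triggers ACME issuance instead of the default certificate
```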

The dashboard is enabled only on the local cluster. Its port is not exposed on the Service (expose.default: false on the traefik entrypoint), so even on local it is unreachable from the network and must be accessed via port-forward.

| Cluster | api.dashboard | api.insecure | Reachable via          |
|---------|---------------|--------------|------------------------|
| local   | true          | true         | kubectl port-forward   |
| remote  | false         | false        | not exposed            |

Access on local:

```shell
kubectl port-forward -n traefik \
  $(kubectl get pods -n traefik -l app.kubernetes.io/name=traefik -o name) 8080:8080
# Open http://127.0.0.1:8080/dashboard/ (trailing slash required)
```

IngressRoute is Traefik’s CRD for routing. Each one maps a hostname to a backend Service. The Traefik Helm chart installs the CRDs automatically.

Grafana is exposed on both clusters, so its IngressRoute lives in base:

infrastructure/base/monitoring/ingressroute-grafana.yaml

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`grafana.k8s.local`)
      kind: Rule
      services:
        - name: kube-prometheus-grafana
          port: 80
  tls: {}
```

tls: {} uses the default certificate from the tlsStore — the mkcert wildcard on local, Let’s Encrypt on remote.

The dashboard IngressRoute lives in the local overlay:

infrastructure/overlays/local/ingressroute-dashboard.yaml

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.k8s.local`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
  tls: {}
```

api@internal is a built-in Traefik service that routes to the dashboard API.
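A quick sanity check that api@internal is serving, assuming the 8080 port-forward from earlier is running (the /api/overview endpoint is part of Traefik's dashboard API):

```shell
# With the 8080 port-forward active, the dashboard API answers with a JSON overview
curl -s http://127.0.0.1:8080/api/overview
```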

After Flux deploys Traefik, find the LoadBalancer IP (remote) or use 127.0.0.1 (local):

```shell
kubectl get svc -n traefik traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

Add entries to /etc/hosts:

```
<TRAEFIK_LB_IP> grafana.k8s.local traefik.k8s.local
```

After committing and pushing, reconcile and verify:

```shell
flux reconcile source git flux-system
flux reconcile kustomization infrastructure
flux get helmreleases -A
kubectl get pods -n traefik
kubectl get ingressroute -A
```