
Local DNS for *.k8s.local

The /etc/hosts approach works for a handful of domains, but every new IngressRoute means another line to add manually. A local DNS server handles *.k8s.local with a single wildcard rule and survives new services without edits.

This guide uses dnsmasq behind systemd-resolved. Resolved continues handling all normal DNS — dnsmasq only sees queries for k8s.local.

# Fedora
sudo dnf install dnsmasq
# Debian/Ubuntu
sudo apt install dnsmasq

systemd-resolved already owns port 53 on 127.0.0.53, so dnsmasq listens on a different loopback address.
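A quick way to confirm this before picking an address (exact output varies by distro, but systemd-resolved should show up bound to 127.0.0.53):

```shell
# Show what's listening on port 53 — expect systemd-resolved on 127.0.0.53
sudo ss -lntup 'sport = :53'
```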

sudo tee /etc/dnsmasq.d/k8s-local.conf << 'EOF'
# Listen on a separate loopback address
listen-address=127.0.0.2
bind-interfaces
# Don't forward unknown queries upstream (only answer what we know)
no-resolv
# Individual records
address=/docs.k8s.local/127.0.0.1
address=/grafana.k8s.local/127.0.0.1
address=/traefik.k8s.local/127.0.0.1
address=/pgweb.k8s.local/127.0.0.1
# Or wildcard: send ALL *.k8s.local to one IP
# address=/k8s.local/127.0.0.1
EOF

The wildcard form (address=/k8s.local/127.0.0.1) matches every subdomain. Use individual records if different services resolve to different IPs, or the wildcard if everything goes through the same Traefik port-forward on localhost.

sudo systemctl restart dnsmasq
sudo systemctl enable dnsmasq
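To confirm dnsmasq answers before wiring up resolved, query it directly (this assumes dig is installed, from bind-utils on Fedora or dnsutils on Debian/Ubuntu):

```shell
# Ask dnsmasq directly on its dedicated loopback address
dig @127.0.0.2 docs.k8s.local +short
# With the wildcard rule enabled, any subdomain answers —
# even names that were never configured individually
dig @127.0.0.2 whatever.k8s.local +short
```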

Tell systemd-resolved to send .k8s.local queries to the dnsmasq instance:

sudo resolvectl dns lo 127.0.0.2
sudo resolvectl domain lo "~k8s.local"

The ~ prefix means “routing domain” — resolved sends any query matching *.k8s.local to the DNS server on that interface (127.0.0.2), but doesn’t add k8s.local as a search domain.

If you want short names like docs to resolve as docs.k8s.local, drop the tilde:

sudo resolvectl domain lo "k8s.local"

Without ~, it acts as both a routing domain and a search domain. Now curl http://docs resolves as docs.k8s.local.

The resolvectl commands don’t survive a reboot. To persist them, create a systemd-networkd config (this assumes systemd-networkd is enabled; on a NetworkManager-only system you’d persist the same settings another way, such as a dispatcher script):

sudo tee /etc/systemd/network/10-k8s-local-dns.network << 'EOF'
[Match]
Name=lo
[Network]
DNS=127.0.0.2
Domains=~k8s.local
EOF

Restart networking:

sudo systemctl restart systemd-networkd

Then verify:

# Check resolved sees the routing domain
resolvectl status lo
# Test a lookup
resolvectl query docs.k8s.local

The query should resolve via 127.0.0.2 with the IP configured in dnsmasq.

To add a new domain later, edit /etc/dnsmasq.d/k8s-local.conf and restart dnsmasq:

sudo systemctl restart dnsmasq

No need to touch /etc/hosts, resolved config, or the networkd file again. If you used the wildcard rule, new subdomains resolve immediately with no changes at all.
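In the individual-record case, adding a hypothetical new service (the name below is illustrative) is one appended line plus a restart:

```shell
# dnsmasq reads every file in /etc/dnsmasq.d/, so appending one line is enough
echo 'address=/prometheus.k8s.local/127.0.0.1' | sudo tee -a /etc/dnsmasq.d/k8s-local.conf
sudo systemctl restart dnsmasq
```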

Alternative: cluster-side DNS with CoreDNS


Instead of running dnsmasq on the host, deploy a DNS server inside the cluster and point your machine at it. This gives you automatic resolution of Kubernetes service names — hello-app.demo.svc.cluster.local resolves without any host-side configuration changes when you add new services.

The CoreDNS Helm chart is maintained by the CoreDNS project (CNCF graduated). You deploy a second CoreDNS instance alongside the built-in one, configured as a NodePort service so your host can reach it.

helm repo add coredns https://coredns.github.io/helm
helm install coredns-external coredns/coredns \
  --namespace kube-system \
  --set isClusterService=false \
  --set serviceType=NodePort \
  --set service.clusterIP="" \
  --set "service.nodePort=31053" \
  --set "servers[0].zones[0].zone=." \
  --set "servers[0].port=53" \
  --set "servers[0].plugins[0].name=kubernetes" \
  --set "servers[0].plugins[0].parameters=cluster.local in-addr.arpa ip6.arpa" \
  --set "servers[0].plugins[1].name=forward" \
  --set "servers[0].plugins[1].parameters=. /etc/resolv.conf"

This creates a CoreDNS instance that resolves *.cluster.local via the Kubernetes API and forwards everything else upstream. It listens on NodePort 31053.
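You can sanity-check the deployment before touching host DNS. The resource names below follow Helm’s usual release-plus-chart naming for a release called coredns-external; adjust if your chart version names things differently:

```shell
# Confirm the NodePort service exists and maps port 53 to 31053
kubectl -n kube-system get svc coredns-external-coredns
# Inspect the rendered Corefile
kubectl -n kube-system get configmap coredns-external-coredns -o yaml
```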

Add extraPortMappings to your Kind cluster config so NodePort 31053 is accessible on localhost:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31053
    hostPort: 31053
    protocol: UDP
  - containerPort: 31053
    hostPort: 31053
    protocol: TCP
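Port mappings are fixed at creation time — Kind can’t add them to a running cluster — so the cluster has to be created (or recreated) with this config. The filename here is an assumption:

```shell
# Recreate the cluster with the port mappings in place
kind delete cluster
kind create cluster --config kind-config.yaml
```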

macOS uses resolver files in /etc/resolver/. Create one for cluster.local (on Linux, the resolvectl approach from earlier works here too — newer versions of systemd-resolved accept an address:port pair, e.g. resolvectl dns lo 127.0.0.1:31053 plus resolvectl domain lo "~cluster.local"):

sudo mkdir -p /etc/resolver
sudo tee /etc/resolver/cluster.local << 'EOF'
nameserver 127.0.0.1
port 31053
EOF

macOS checks /etc/resolver/<domain> before the system DNS for matching queries. Any lookup ending in .cluster.local goes to the CoreDNS instance in your Kind cluster.
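To verify macOS picked up the file, check the resolver list with the built-in scutil; look for an entry with domain cluster.local, nameserver 127.0.0.1, and port 31053:

```shell
# List all resolvers; /etc/resolver entries appear near the end
scutil --dns | grep -A 3 'cluster.local'
```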

This file persists across reboots. Remove it when you no longer need cluster DNS resolution:

sudo rm /etc/resolver/cluster.local

Test a lookup against a known service. Every cluster has kubernetes.default.svc.cluster.local:

dig @127.0.0.1 -p 31053 kubernetes.default.svc.cluster.local

If you have services deployed, resolve them by their full DNS name:

# Format: <service>.<namespace>.svc.cluster.local
dig @127.0.0.1 -p 31053 hello-app.demo.svc.cluster.local

Note that the answer is the service’s ClusterIP. Name resolution works from the host, but actually connecting (with curl, for example) still needs a port-forward or a route into the cluster’s service network.

New services resolve automatically — no config files to edit, no dnsmasq to restart. The cluster’s CoreDNS picks them up the moment the Service object exists.