
Fedora (Server / Workstation)

This section covers standard, mutable Fedora installations: Fedora Server, Fedora Workstation, Fedora Cloud (AWS, GCP, etc.), and similar spins. If you run Silverblue or any Atomic variant, skip to the next section.

Terminal window
curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh

This drops the k0s binary into /usr/local/bin/. You can verify:

Terminal window
k0s version

If you prefer not to pipe curl to sh, download the binary directly:

Terminal window
K0S_VERSION=$(curl -sSf https://docs.k0sproject.io/stable.txt)
sudo curl -sSfL "https://github.com/k0sproject/k0s/releases/download/${K0S_VERSION}/k0s-${K0S_VERSION}-amd64" \
  -o /usr/local/bin/k0s
sudo chmod +x /usr/local/bin/k0s
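The URL above hardcodes the amd64 asset. On other architectures (for example Fedora on aarch64), the release asset name differs. A small sketch to pick the right suffix — the case arms are assumptions based on the asset names published on the k0s GitHub releases page, so verify them there:

```shell
# Map the machine architecture to the k0s release asset suffix.
# Asset naming (k0s-<version>-<arch>) is assumed from the releases page.
case "$(uname -m)" in
  x86_64)  K0S_ARCH=amd64 ;;
  aarch64) K0S_ARCH=arm64 ;;
  armv7l)  K0S_ARCH=arm ;;
  *) echo "unsupported arch: $(uname -m)" >&2; exit 1 ;;
esac
echo "$K0S_ARCH"
```

You can then substitute `${K0S_ARCH}` for the hardcoded `amd64` in the download URL.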

Fedora ships firewalld by default. Open the ports k0s needs:

Terminal window
sudo firewall-cmd --permanent --add-port=6443/tcp # API server
sudo firewall-cmd --permanent --add-port=2380/tcp # etcd peers
sudo firewall-cmd --permanent --add-port=9443/tcp # k0s join API
sudo firewall-cmd --permanent --add-port=8132/tcp # konnectivity
sudo firewall-cmd --permanent --add-port=10250/tcp # kubelet
sudo firewall-cmd --permanent --add-port=179/tcp # kube-router BGP
# Trust the CNI bridge interface so pods can reach the host
sudo firewall-cmd --permanent --zone=trusted --add-interface=kube-bridge
# Enable masquerading for pod egress
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload

If you run a dedicated worker node (no control plane), you only need ports 10250 and 179.
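Note that the pod-traffic rules still apply on a worker, since pods run there. The minimal worker-only firewalld setup would look like this (a sketch, mirroring the rules above):

```shell
# Worker node: only the kubelet and BGP ports, plus pod-traffic rules
sudo firewall-cmd --permanent --add-port=10250/tcp  # kubelet
sudo firewall-cmd --permanent --add-port=179/tcp    # kube-router BGP
sudo firewall-cmd --permanent --zone=trusted --add-interface=kube-bridge
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload
```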

Fedora enables SELinux in enforcing mode. Install the container SELinux policy:

Terminal window
sudo dnf install -y container-selinux

Then create a containerd config snippet so k0s’s embedded containerd enables SELinux labeling:

Terminal window
sudo mkdir -p /etc/k0s/containerd.d
cat <<'EOF' | sudo tee /etc/k0s/containerd.d/selinux.toml
[plugins."io.containerd.grpc.v1.cri"]
  enable_selinux = true
EOF
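Once the cluster is running and has started a workload, you can spot-check that labeling is actually in effect. This is a hedged sanity check, assuming container-selinux's standard confined domain names:

```shell
# Confirm SELinux is enforcing on the host
getenforce
# After a pod is running, container processes should appear in the
# container_t domain (container-selinux's confined domain for containers)
ps -eZ | grep container_t
```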

k0s works without a config file. It applies sensible defaults: etcd for storage, kube-router for CNI, standard CIDRs. If you want to customize, generate the defaults and edit:

Terminal window
sudo mkdir -p /etc/k0s
k0s config create | sudo tee /etc/k0s/k0s.yaml
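If you only need to change a single field, you can patch the generated file in place rather than editing it by hand. A sketch, assuming the default pod CIDR of 10.244.0.0/16 (check your generated k0s.yaml first) — demonstrated on a scratch copy, since on a real node you would run the sed against /etc/k0s/k0s.yaml with sudo:

```shell
# Scratch excerpt standing in for the generated /etc/k0s/k0s.yaml
cat > /tmp/k0s-demo.yaml <<'EOF'
spec:
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
EOF
# Swap in a custom pod CIDR (10.200.0.0/16 is an arbitrary example)
sed -i 's|podCIDR: 10.244.0.0/16|podCIDR: 10.200.0.0/16|' /tmp/k0s-demo.yaml
grep podCIDR /tmp/k0s-demo.yaml
```

Do this before the first start; changing CIDRs on a running cluster is not a supported in-place operation.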

Single node (controller + worker):

Terminal window
sudo k0s install controller --enable-worker --no-taints
sudo k0s start

The --enable-worker flag runs a kubelet on the same node as the control plane. The --no-taints flag lets workloads schedule on this node. If you pass a custom config, add -c /etc/k0s/k0s.yaml.
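If you skip --no-taints, k0s taints the controller so regular workloads stay off it. You can inspect, and later remove, that taint by hand. The taint key below is an assumption based on upstream Kubernetes conventions; check the actual output on your node before deleting anything:

```shell
# List each node's taints
sudo k0s kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}'
# Remove the controller taint (the trailing "-" means delete)
sudo k0s kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
```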

Dedicated controller (multi-node cluster):

Terminal window
sudo k0s install controller -c /etc/k0s/k0s.yaml
sudo k0s start

Worker node (joins an existing cluster):

First, generate a join token on the controller:

Terminal window
sudo k0s token create --role worker > join-token
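For a one-off join you can limit the token's lifetime with --expiry, which takes a duration. This assumes the flag as shipped in current k0s releases; run `k0s token create --help` to confirm on your version:

```shell
# Token that is only valid for the next hour
sudo k0s token create --role worker --expiry 1h > join-token
```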

Copy that token to the worker node, then:

Terminal window
sudo mkdir -p /etc/k0s
sudo cp join-token /etc/k0s/join-token
sudo k0s install worker --token-file /etc/k0s/join-token
sudo k0s start
Check that the service is up and the node registered:

Terminal window
sudo k0s status
sudo k0s kubectl get nodes

The node should show Ready within a minute or two.
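If you script the install, you can poll instead of watching by hand. A minimal sketch, assuming the default kubectl output where the STATUS column contains "Ready":

```shell
# Wait up to ~2 minutes for the node to report Ready
for i in $(seq 1 24); do
  if sudo k0s kubectl get nodes --no-headers 2>/dev/null | grep -q ' Ready'; then
    echo "node is Ready"
    break
  fi
  sleep 5
done
```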

k0s install creates a systemd unit (k0scontroller.service or k0sworker.service) that starts on boot automatically. You can manage it either through k0s's wrapper commands or with systemctl directly:

Terminal window
sudo k0s stop
sudo k0s start
sudo systemctl status k0scontroller
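Since it's a regular systemd unit, logs land in the journal:

```shell
# Follow controller logs live (use k0sworker on a worker node)
sudo journalctl -u k0scontroller -f
# Show the last 100 lines from the current boot
sudo journalctl -u k0scontroller -b -n 100
```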

Export the admin kubeconfig:

Terminal window
mkdir -p ~/.kube
sudo k0s kubeconfig admin > ~/.kube/config
chmod 600 ~/.kube/config
kubectl get nodes