
Provisioning with OpenTofu

This tutorial provisions a remote VPS with OpenTofu and deploys a single-node k0s cluster on it. By the end you will have a publicly reachable Kubernetes cluster running Flux, ready for the same GitOps workflow you used locally.

OpenTofu is an open-source fork of Terraform, licensed under MPL-2.0. The CLI is tofu instead of terraform, but the subcommands are identical — tofu init, tofu plan, tofu apply, tofu destroy — and it works with existing Terraform providers.

One feature Terraform lacks: OpenTofu supports client-side state encryption. You can encrypt your .tfstate at rest with Age or AWS KMS without relying on a remote backend’s encryption.
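
As a sketch, assuming OpenTofu 1.7 or later (where state encryption landed) and a hypothetical passphrase variable, the configuration looks like:

```hcl
terraform {
  encryption {
    # Derive an AES key from a passphrase; aws_kms is another built-in key provider
    key_provider "pbkdf2" "passphrase" {
      passphrase = var.state_passphrase # hypothetical variable, e.g. set via TF_VAR_state_passphrase
    }
    method "aes_gcm" "default" {
      keys = key_provider.pbkdf2.passphrase
    }
    state {
      method = method.aes_gcm.default
    }
  }
}
```

Plan files can be encrypted the same way by adding a plan block next to the state block.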

# mise (recommended)
mise use opentofu@latest
# Or standalone
curl --proto '=https' --tlsv1.2 -fsSL https://get.opentofu.org/install-opentofu.sh | sh

Verify the installation:

tofu version

This tutorial uses DigitalOcean. Export your API token before running any tofu command:

export DIGITALOCEAN_TOKEN="dop_v1_..."

Create a tofu/digitalocean/ directory for the Tofu files.

terraform {
  required_version = ">= 1.6.0"
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.44"
    }
  }
}

provider "digitalocean" {}

The provider reads DIGITALOCEAN_TOKEN from the environment automatically.

variable "region" {
  description = "DigitalOcean region"
  type        = string
  default     = "nyc1"
}

variable "droplet_size" {
  description = "Droplet size slug"
  type        = string
  default     = "s-2vcpu-4gb"
}

variable "domain_name" {
  description = "Root domain (e.g. example.com)"
  type        = string
}

variable "ssh_key_name" {
  description = "Name of an existing SSH key in your DO account"
  type        = string
}

# --- VPC ---
resource "digitalocean_vpc" "k8s" {
  name     = "k8s-vpc"
  region   = var.region
  ip_range = "10.10.10.0/24"
}

# --- SSH Key (reference an existing key) ---
data "digitalocean_ssh_key" "default" {
  name = var.ssh_key_name
}

# --- Droplet ---
resource "digitalocean_droplet" "k8s" {
  name     = "k8s-node"
  image    = "ubuntu-24-04-x64"
  size     = var.droplet_size
  region   = var.region
  vpc_uuid = digitalocean_vpc.k8s.id
  ssh_keys = [data.digitalocean_ssh_key.default.id]
  tags     = ["k8s", "k0s"]
}

# --- Firewall ---
resource "digitalocean_firewall" "k8s" {
  name        = "k8s-firewall"
  droplet_ids = [digitalocean_droplet.k8s.id]

  # SSH
  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # HTTP (Let's Encrypt ACME challenges)
  inbound_rule {
    protocol         = "tcp"
    port_range       = "80"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # HTTPS (application traffic)
  inbound_rule {
    protocol         = "tcp"
    port_range       = "443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # Kubernetes API server
  inbound_rule {
    protocol         = "tcp"
    port_range       = "6443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # Konnectivity (controller-worker tunnel)
  inbound_rule {
    protocol         = "tcp"
    port_range       = "8132"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # k0s join API
  inbound_rule {
    protocol         = "tcp"
    port_range       = "9443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # Allow all outbound
  outbound_rule {
    protocol              = "tcp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "udp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "icmp"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}

# --- DNS ---
resource "digitalocean_domain" "main" {
  name = var.domain_name
}

resource "digitalocean_record" "wildcard" {
  domain = digitalocean_domain.main.id
  type   = "A"
  name   = "*"
  value  = digitalocean_droplet.k8s.ipv4_address
  ttl    = 300
}

resource "digitalocean_record" "root" {
  domain = digitalocean_domain.main.id
  type   = "A"
  name   = "@"
  value  = digitalocean_droplet.k8s.ipv4_address
  ttl    = 300
}

output "droplet_ip" {
  description = "Public IPv4 address of the Droplet"
  value       = digitalocean_droplet.k8s.ipv4_address
}

output "domain" {
  description = "Root domain"
  value       = digitalocean_domain.main.name
}
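
Since domain_name and ssh_key_name have no defaults, one way to supply them is a terraform.tfvars file next to the configuration (values below are placeholders):

```hcl
# tofu/digitalocean/terraform.tfvars
domain_name  = "example.com" # root domain you want managed by DigitalOcean DNS
ssh_key_name = "my-laptop"   # must match an SSH key name already in your DO account
# region and droplet_size fall back to their defaults (nyc1, s-2vcpu-4gb)
```

tofu plan and tofu apply read terraform.tfvars automatically; passing -var on the command line works as well.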
Port   Purpose
22     SSH — k0sctl connects here to install k0s
80     HTTP — Let’s Encrypt ACME challenge verification
443    HTTPS — application traffic through Traefik
6443   Kubernetes API server
8132   Konnectivity — controller-to-worker tunnel
9443   k0s join API — used when adding worker nodes
Size          RAM    Monthly   Use case
s-1vcpu-2gb   2 GB   ~$12      Minimum viable — tight on memory
s-2vcpu-4gb   4 GB   ~$24      Comfortable for small workloads
s-4vcpu-8gb   8 GB   ~$48      Production-like with monitoring stack

For a single-node cluster running Flux, Traefik, and a few applications, s-2vcpu-4gb gives enough headroom without waste.

cd tofu/digitalocean
tofu init # download the DigitalOcean provider
tofu plan # preview what will be created
tofu apply # create the VPC, Droplet, firewall, and DNS records

tofu apply prints the Droplet IP and domain when it finishes. Note the IP — you need it for the next step.

k0sctl is a standalone tool that SSHes into your server and installs k0s. It handles bootstrapping the cluster from a single YAML file.

# Install k0sctl
brew install k0sproject/tap/k0sctl

Create a k0sctl.yaml in the project root:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: remote-cluster
spec:
  hosts:
    - role: controller+worker
      noTaints: true
      ssh:
        address: <DROPLET_IP>
        user: root
        keyPath: ~/.ssh/id_ed25519
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: ClusterConfig
      spec:
        api:
          externalAddress: <DROPLET_IP>
          sans:
            - <DROPLET_IP>
            - k8s.example.com

Replace <DROPLET_IP> with the IP from tofu output. The controller+worker role runs both the control plane and workloads on a single node. noTaints: true removes the control-plane taint so pods can schedule on it.

The sans list tells k0s which names and IPs to include in the API server’s TLS certificate. Add both the IP and any DNS name you plan to use for kubectl access.

Apply the configuration:

# Install k0s on the remote server
k0sctl apply --config k0sctl.yaml
# Retrieve the kubeconfig
k0sctl kubeconfig --config k0sctl.yaml > ~/.kube/remote.conf
export KUBECONFIG=~/.kube/remote.conf
# Verify the node is ready
kubectl get nodes

With KUBECONFIG pointing at the remote cluster, bootstrap Flux the same way as locally (see Flux CD for details):

flux bootstrap github \
  --owner=<your-user> \
  --repository=<your-repo> \
  --branch=main \
  --path=clusters/remote \
  --personal

This creates a clusters/remote/ path in your Git repository. Flux watches that path and reconciles everything it finds.
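
For reference, the sync manifests bootstrap commits under that path look roughly like this (shown as a sketch; exact apiVersions and intervals vary by Flux version):

```yaml
# clusters/remote/flux-system/gotk-sync.yaml (generated by flux bootstrap)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: main
  secretRef:
    name: flux-system
  url: ssh://git@github.com/<your-user>/<your-repo>
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./clusters/remote
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```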

The remote cluster needs its own Age keypair — never reuse the local cluster’s key. See Secrets Management for full details.

age-keygen | kubectl create secret generic sops-age \
  --namespace=flux-system \
  --from-file=age.agekey=/dev/stdin

Add the new public key as an additional recipient in .sops.yaml so secrets are encrypted to both clusters.
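
A minimal .sops.yaml sketch with both recipients (the age1<…> values are placeholders for your real public keys, and the path_regex is an example — match whatever pattern your repository uses):

```yaml
# .sops.yaml at the repository root
creation_rules:
  - path_regex: .*\.yaml$
    encrypted_regex: ^(data|stringData)$
    age: "age1<local-cluster-public-key>,age1<remote-cluster-public-key>"
```

After editing the rules, run sops updatekeys on each already-encrypted file so existing secrets are re-encrypted to both keys.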

Point your domain’s nameservers to DigitalOcean:

  • ns1.digitalocean.com
  • ns2.digitalocean.com
  • ns3.digitalocean.com

The wildcard A record (*.example.com) resolves all subdomains to the Droplet’s IP. Traefik then routes requests to the correct service based on the Host header. No manual DNS entry per service — add an IngressRoute and it works.
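
For example, exposing a hypothetical podinfo service takes nothing more than an IngressRoute (host, service name, port, and certResolver below are placeholders for your setup):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: podinfo
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`podinfo.example.com`)
      kind: Rule
      services:
        - name: podinfo
          port: 9898
  tls:
    certResolver: letsencrypt
```

Because the wildcard record already resolves podinfo.example.com to the Droplet, no DNS change is needed.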

Nameserver changes can take up to 48 hours to propagate, though most registrars complete within an hour or two.

If you use Cloudflare instead of DigitalOcean for DNS, swap in the cloudflare provider and its record resources. One important detail: set proxied = false on any record you use to reach the Kubernetes API on port 6443. Cloudflare’s proxy only handles HTTP/HTTPS traffic, so proxying the API server breaks kubectl.
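
A sketch of the equivalent wildcard record with the cloudflare provider (the zone ID variable is hypothetical; note the record-value attribute is `value` in older provider releases and `content` in newer ones):

```hcl
provider "cloudflare" {} # reads CLOUDFLARE_API_TOKEN from the environment

resource "cloudflare_record" "wildcard" {
  zone_id = var.cloudflare_zone_id # hypothetical variable holding your zone ID
  name    = "*"
  type    = "A"
  content = digitalocean_droplet.k8s.ipv4_address
  ttl     = 300
  proxied = false # DNS-only: kubectl may reach the API server via this name
}
```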

When you are done with the remote cluster, tear it down in reverse order:

# Remove k0s from the server
k0sctl reset --config k0sctl.yaml
# Destroy all DigitalOcean resources
tofu destroy

k0sctl reset uninstalls k0s and cleans up the node. tofu destroy deletes the Droplet, VPC, firewall, and DNS records. Your Git repository is unaffected — re-run tofu apply and k0sctl apply to rebuild from scratch.