
Container Commands to kubectl

This reference maps container workflows to their Kubernetes equivalents. Some concepts translate directly. Others split across multiple Kubernetes primitives — a single docker run touches Deployments, Services, ConfigMaps, and Secrets in Kubernetes.

| Container | Kubernetes |
| --- | --- |
| `podman run -d --name app image:tag` | Create a Deployment manifest, apply with `kubectl apply -f` |
| `podman run -it --rm image:tag sh` | `kubectl run tmp --rm -it --image=image:tag -- sh` |
| `docker compose up -d` | `kubectl apply -k overlays/dev/` or `flux reconcile kustomization <name>` |
| `docker compose up --build` | Build image, push to registry, then `kubectl rollout restart deployment/<name>` |
| `podman start <container>` | `kubectl scale deployment/<name> --replicas=1` (if scaled to 0) |
| `podman stop <container>` | `kubectl scale deployment/<name> --replicas=0` |
| `podman restart <container>` | `kubectl rollout restart deployment/<name> -n <ns>` |
| `docker compose restart` | `kubectl rollout restart deployment/<name> -n <ns>` (per deployment) |
| `docker compose restart <service>` | `kubectl rollout restart deployment/<service> -n <ns>` |
| `docker compose down` | `kubectl delete -k overlays/dev/` or remove from Flux kustomization |
| `docker compose pull` | Kubernetes pulls images on pod creation; force with `kubectl rollout restart` |

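
The first row above compresses several steps. A minimal sketch of turning a `podman run` into a Deployment manifest uses `kubectl create --dry-run` to scaffold the YAML (the names `app`, `image:tag`, and the `dev` namespace are placeholders):

```sh
# Scaffold a Deployment manifest equivalent to `podman run -d --name app image:tag`
# without touching the cluster, then review and apply it.
kubectl create deployment app --image=image:tag -n dev \
  --dry-run=client -o yaml > deployment.yaml

kubectl apply -f deployment.yaml
```

The generated manifest is deliberately minimal; Services, ConfigMaps, and Secrets still need their own manifests.
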

| Container | Kubernetes |
| --- | --- |
| `podman logs <container>` | `kubectl logs deployment/<name> -n <ns>` |
| `podman logs -f <container>` | `kubectl logs -f deployment/<name> -n <ns>` |
| `podman logs --tail 100 <container>` | `kubectl logs --tail=100 deployment/<name> -n <ns>` |
| `docker compose logs` | `kubectl logs -l app=<label> -n <ns> --all-containers` |
| `docker compose logs -f <service>` | `kubectl logs -f deployment/<service> -n <ns>` |
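
When a deployment runs several replicas, `kubectl logs deployment/<name>` follows only one pod. A label selector streams them all, and `--prefix` keeps interleaved output readable (the label `app=web` and namespace `dev` are placeholders):

```sh
# Follow logs from every pod carrying the app=web label, prefixing each
# line with its pod name so output from different replicas stays distinguishable.
kubectl logs -f -l app=web -n dev --all-containers --prefix --tail=50
```
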

k9s: select a pod, press l for logs. Press 0 for all lines from the start, 1–9 for the last N hundred lines.

| Container | Kubernetes |
| --- | --- |
| `podman exec -it <container> sh` | `kubectl exec -it deployment/<name> -n <ns> -- sh` |
| `podman exec -it <container> bash` | `kubectl exec -it deployment/<name> -n <ns> -- bash` |
| `docker compose exec <service> sh` | `kubectl exec -it deployment/<service> -n <ns> -- sh` |

k9s: select a pod, press s for shell.

For multi-container pods, specify the container:

```sh
kubectl exec -it pod/<name> -c <container> -n <ns> -- sh
```
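
If you don't know the container names up front, jsonpath can list them (same `<name>`/`<ns>` placeholders as above):

```sh
# Print the container names declared in the pod spec.
kubectl get pod/<name> -n <ns> -o jsonpath='{.spec.containers[*].name}'
```
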

| Container | Kubernetes |
| --- | --- |
| `podman ps` | `kubectl get pods -n <ns>` |
| `podman ps -a` | `kubectl get pods -n <ns>` (terminated pods are listed by default; the old `--show-all` flag was removed) |
| `docker compose ps` | `kubectl get pods -n <ns> -l app=<label>` |
| `podman inspect <container>` | `kubectl describe pod/<name> -n <ns>` |
| `podman inspect --format '{{.State.Status}}'` | `kubectl get pod/<name> -n <ns> -o jsonpath='{.status.phase}'` |
| `podman images` | Images live on each node's runtime, not in the cluster; check your registry |
| `podman port <container>` | `kubectl get svc -n <ns>` |
| `docker compose top` | `kubectl top pods -n <ns>` (requires metrics-server) |
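
The compose-era habit of checking why a container died also maps to querying events, which `kubectl describe` only shows per object:

```sh
# Recent events for one namespace, oldest first. Useful when a pod is
# crash-looping and the describe output scrolls past too quickly.
kubectl get events -n <ns> --sort-by=.lastTimestamp
```
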

k9s: press d to describe, y for full YAML.

| Container | Kubernetes |
| --- | --- |
| `podman run -e KEY=value` | Set in the Deployment manifest under `env:` |
| `podman run --env-file .env` | Create a ConfigMap or Secret, reference with `envFrom:` |
| `docker compose` `environment:` block | ConfigMap for plain values, Secret for sensitive values |
| Edit `.env` and `docker compose restart` | `kubectl edit configmap/<name>` then `kubectl rollout restart deployment/<name>` |

```sh
# Create a ConfigMap from an env file
kubectl create configmap app-config --from-env-file=.env -n <ns>

# Create a Secret from literals
kubectl create secret generic app-secrets --from-literal=DB_PASS=hunter2 -n <ns>
```
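
Creating the ConfigMap is only half the story; the Deployment still has to reference it. `kubectl set env --from` wires it up imperatively (a sketch; in this project the declarative route is editing the manifest):

```sh
# Inject every key of the app-config ConfigMap as environment
# variables on the deployment's containers.
kubectl set env deployment/<name> --from=configmap/app-config -n <ns>
```
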

In this project, secrets use SOPS encryption. Edit with `mise run edit <file>`.

| Container | Kubernetes |
| --- | --- |
| `podman run -v /host:/container` | `hostPath` volume in pod spec (avoid in production) |
| `podman run -v name:/container` | PersistentVolumeClaim mounted in pod spec |
| `podman volume create` | Create a PersistentVolumeClaim manifest |
| `podman volume ls` | `kubectl get pvc -n <ns>` |
| `podman volume rm` | `kubectl delete pvc/<name> -n <ns>` |
| `docker compose` `volumes:` block | PVC manifests in the kustomization overlay |
| `podman cp file container:/path` | `kubectl cp file <pod>:/path -n <ns>` |
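
A minimal PVC manifest, roughly what `podman volume create` becomes (the name `app-data`, namespace `dev`, and 1Gi size are placeholders; the storage class falls back to the cluster default):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: dev
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
```
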

k9s: press Shift+B to jump to PVCs.

| Container | Kubernetes |
| --- | --- |
| `podman run -p 8080:80` | Service with `port: 8080`, `targetPort: 80` |
| `docker compose` `ports:` block | Service manifest per deployment |
| `podman network create` | Namespaces provide network isolation; NetworkPolicies add fine-grained rules |
| `podman network ls` | `kubectl get networkpolicies -A` |
| Container-to-container by service name | `<service>.<namespace>.svc.cluster.local` DNS |
| localhost between compose services | Containers in the same pod share localhost; across pods, use Service DNS |
| Reverse proxy (nginx/caddy in compose) | IngressRoute (Traefik) or Ingress resource |
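
To sanity-check service DNS from inside the cluster, a throwaway pod works (a sketch; the `web` service, `dev` namespace, and busybox tag are placeholders):

```sh
# Resolve a service's cluster DNS name from a temporary pod,
# which is deleted again when the command exits.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -n dev \
  -- nslookup web.dev.svc.cluster.local
```
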

The compose equivalent of checking which ports map where is inspecting IngressRoutes:

```sh
# List all routes
kubectl get ingressroutes -A

# Show hostnames
kubectl get ingressroutes -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.spec.routes[*].match}{"\n"}{end}'
```

k9s: press Shift+L to jump to IngressRoutes, select one and press y to see the hostname match rule.

| Container | Kubernetes |
| --- | --- |
| `docker compose up --scale web=3` | `kubectl scale deployment/<name> --replicas=3 -n <ns>` |
| Single instance (compose default) | `replicas: 1` in Deployment manifest |
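
To check that a scale operation actually converged (placeholders as above):

```sh
# Ready replicas vs. desired replicas for one deployment.
kubectl get deployment/<name> -n <ns> \
  -o jsonpath='{.status.readyReplicas}/{.spec.replicas}{"\n"}'
```
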

k9s: navigate to :deploy, select a deployment, press s to scale.

| Container | Kubernetes |
| --- | --- |
| `podman build -t app:latest .` | Build locally, push to registry, update the Deployment image |
| `docker compose build` | No direct equivalent; build is separate from deploy |
| `docker compose up --build` | Build, push, then `kubectl rollout restart` or update the image tag |
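
Updating the image tag can be done imperatively while iterating (the container name `app` and registry URL are placeholders; in this project the declarative route is editing the manifest and pushing):

```sh
# Point the deployment's container at a new tag and trigger a rollout.
kubectl set image deployment/<name> app=registry.example.com/app:v2 -n <ns>

# Watch the rollout complete.
kubectl rollout status deployment/<name> -n <ns>
```
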

For local-only images (like the docs site), set imagePullPolicy: Never in the Deployment and load the image into the cluster’s container runtime directly:

```sh
podman build -t cluster-docs:latest --target production docs/
sudo k0s ctr images rm docker.io/library/cluster-docs:latest localhost/cluster-docs:latest
podman save cluster-docs:latest | sudo k0s ctr images import -
sudo k0s ctr images tag localhost/cluster-docs:latest docker.io/library/cluster-docs:latest
kubectl rollout restart deployment/docs -n docs
```

containerd does not overwrite tags. You must `k0s ctr images rm` the old tag before importing. Without this step, containerd keeps the stale image digest and every new pod launches the old image, even though the import reports success.

Retag after import. Podman exports as `localhost/cluster-docs:latest`, but Kubernetes resolves a bare `image: cluster-docs:latest` to `docker.io/library/cluster-docs:latest`. The `ctr images tag` step creates the alias the kubelet actually looks for.
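
To confirm the import and retag both landed, list what containerd actually holds (a quick check, assuming the same image name):

```sh
# Both the localhost/ and docker.io/library/ references should appear,
# pointing at the same digest.
sudo k0s ctr images ls | grep cluster-docs
```
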

| Container | Kubernetes |
| --- | --- |
| `HEALTHCHECK CMD curl -f http://localhost/` | `livenessProbe` and `readinessProbe` in pod spec |
| `docker compose` `healthcheck:` block | Probes support `httpGet`, `exec`, and `tcpSocket` |
| `podman inspect` health status | `kubectl describe pod/<name>` shows probe results |

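
The probe fields are documented in the API itself; `kubectl explain` shows them without leaving the terminal:

```sh
# Field-level documentation for liveness probes, including the
# httpGet, exec, and tcpSocket variants.
kubectl explain pod.spec.containers.livenessProbe
```
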

| Container | Kubernetes |
| --- | --- |
| `podman rm <container>` | `kubectl delete pod/<name> -n <ns>` (the Deployment recreates it) |
| `podman rm -f <container>` | `kubectl delete pod/<name> --grace-period=0 --force` |
| `docker compose down -v` | `kubectl delete -k overlays/dev/` and `kubectl delete pvc -l app=<label>` |
| `podman system prune` | No equivalent; Kubernetes garbage-collects terminated pods automatically |
| `podman image prune` | Managed by the container runtime's garbage collection settings |
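
The closest thing to a targeted prune is deleting pods by phase (a sketch):

```sh
# Remove pods that have terminated with a failure in one namespace.
kubectl delete pods --field-selector=status.phase=Failed -n <ns>
```
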

In this project Flux manages deployments declaratively. The compose workflow of editing a file and running docker compose up maps to editing a manifest and pushing to git:

| Compose workflow | Flux workflow |
| --- | --- |
| Edit `docker-compose.yaml`, run `docker compose up` | Edit manifest, `git push`, Flux reconciles automatically |
| `docker compose pull && docker compose up` | Update image tag in manifest, push, Flux applies |
| `docker compose down` | Remove manifests from git, push, Flux deletes resources |
| Force re-deploy | `flux reconcile kustomization <name>` |
| Check status | `flux get kustomizations -A` and `flux get helmreleases -A` |
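
Two more flux commands without a compose analogue that come up in practice: pausing reconciliation while debugging, then resuming (the kustomization name is a placeholder):

```sh
# Stop Flux from reverting manual changes while you debug...
flux suspend kustomization <name>

# ...and hand control back when done.
flux resume kustomization <name>
```
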

k9s: press Shift+K for Flux Kustomizations, Shift+H for HelmReleases.