
Security Linting Tools

Kubernetes accepts valid YAML without complaint. A deployment running as root with no resource limits, a mutable filesystem, and no network policy will schedule fine. Security problems surface later — in incidents, audits, or postmortems.

In a GitOps workflow, Flux reconciles the cluster against what is in git. Manifests that pass linting in CI reach the cluster unchanged — there is no manual kubectl apply step where someone could sneak in an unreviewed change. The cluster state matches the repository state.

This makes two categories of tooling less valuable:

  • Live cluster auditing (kubescape/trivy scanning the running cluster) catches drift between what was deployed and what should be deployed. With Flux holding the cluster in lockstep with git, that drift does not happen. The manifests in the repo are the cluster state.
  • In-cluster admission control (Kyverno as an admission controller) blocks bad resources at the API server. But if linters already reject bad manifests before they merge, bad resources never reach the API server. Admission control becomes a redundant gate.

Static linting before merge is the highest-leverage investment. Everything else is defense in depth that we can add later if the threat model demands it.

The real gap: image signature verification


Manifest linters check structure — whether your YAML follows best practices. What they cannot check is whether the container images referenced in those manifests are the images you actually built. A compromised registry, a typo in an image tag, or a supply chain attack that replaces an image after it was pushed — none of these show up in a YAML lint.

Cosign solves this. It signs container images at build time and verifies signatures before deploy. Cosign supports two signing modes:

  • Keyless signing with GitHub OIDC — no keys to manage. GitHub Actions provides an identity token, and cosign signs against the Sigstore transparency log. Verification checks the signing identity and OIDC issuer.
  • Key-based signing with a cosign key pair — you generate and store the keys yourself, but verification needs only the public key.
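
Keyless signing requires the workflow to request an OIDC identity token. The sketch below shows the GitHub Actions permissions and steps involved; the image name, workflow layout, and registry login (omitted) are illustrative, not taken from this repo:

```yaml
# Sketch of a build-and-sign job. `id-token: write` lets cosign obtain the
# GitHub OIDC token for keyless (Sigstore) signing; `packages: write` allows
# pushing to GHCR. Registry login is omitted for brevity.
jobs:
  build-and-sign:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write   # required for keyless signing
    steps:
      - uses: actions/checkout@v4
      - uses: sigstore/cosign-installer@v3
      - name: Build and push
        run: |
          docker build -t ghcr.io/myorg/myapp:${GITHUB_SHA} .
          docker push ghcr.io/myorg/myapp:${GITHUB_SHA}
      - name: Sign by digest
        run: |
          DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' ghcr.io/myorg/myapp:${GITHUB_SHA})
          cosign sign --yes "$DIGEST"
```

Signing by digest rather than tag matters: a tag can be repointed after signing, a digest cannot.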

Sign images in CI after building them, then add a cosign verify step before Flux deploys:

# Sign after build (keyless, in GitHub Actions)
cosign sign --yes ghcr.io/myorg/myapp@$DIGEST

# Verify before deploy (in CI or locally)
cosign verify ghcr.io/myorg/myapp@$DIGEST \
  --certificate-identity=https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com

This keeps verification in the CI pipeline where the other linters run — no in-cluster admission controller needed. Adding cosign to CI closes the supply chain gap without introducing infrastructure that could block deployments if it breaks.

Tool summary (Setup · Security benefit · Daily usage):

kube-linter
  Setup: Trivial — already installed via mise. Optional .kube-linter.yaml config.
  Security benefit: Catches ~80% of common mistakes: missing resource limits, root containers, latest tags, absent network policies.
  Daily usage: One command, runs in milliseconds. Add to CI and forget.

trivy config
  Setup: Easy — single binary, no config needed.
  Security benefit: Different rule set — embedded secrets, Dockerfile issues, CIS misconfigs. Complements kube-linter.
  Daily usage: One command. Add to CI alongside kube-linter.

kubescape
  Setup: Easy — single binary, no config needed.
  Security benefit: Maps to compliance frameworks (NSA-CISA, CIS, MITRE ATT&CK). Produces a trackable risk score.
  Daily usage: Verbose output — run periodically, not on every save.

Kyverno CLI
  Setup: Easy — single binary + policy YAML files in the repo.
  Security benefit: Custom policies in declarative YAML. Catches anything the other linters miss with project-specific rules.
  Daily usage: kyverno apply per policy, or kyverno test in CI.

cosign
  Setup: Easy — single binary. Keyless mode needs no key management.
  Security benefit: Verifies image signatures — the one thing manifest linters cannot check. Closes the supply chain gap.
  Daily usage: Sign in CI after build, verify before deploy.

Falco
  Setup: Hard — HelmRelease + eBPF driver + rule tuning.
  Security benefit: Catches runtime threats no static tool can see: shells in containers, privilege escalation, sensitive file reads.
  Daily usage: Ongoing tuning. Defer until you need runtime detection.
CI / Pre-merge                     Deferred
──────────────                     ────────
kube-linter (broad checks)         Falco (runtime — add when threat model requires it)
trivy config (secrets, CIS)
kubescape (compliance score)
Kyverno CLI (custom policies)
cosign verify (image signatures)

All five tools run in CI. The manifest linters analyze YAML on disk — no cluster required. Cosign verifies image signatures against the registry. Flux ensures that only merged, linted manifests reach the cluster.
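
The whole pre-merge gate fits in a single CI job. A sketch of such a job in GitHub Actions, assuming tools are installed through mise (the `jdx/mise-action` step and the image/repo names are assumptions, not from this repo):

```yaml
# Sketch of a pre-merge security-lint job. Paths and image names
# are illustrative; adjust to the repository layout.
jobs:
  security-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: jdx/mise-action@v2        # installs tools pinned via mise
      - run: kube-linter lint ./manifests/
      - run: trivy config --severity HIGH,CRITICAL ./manifests/
      - run: kubescape scan ./manifests/
      - run: kyverno test .
      - run: |
          cosign verify ghcr.io/myorg/myapp:latest \
            --certificate-identity=https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main \
            --certificate-oidc-issuer=https://token.actions.githubusercontent.com
```

Each step fails the job on findings, so nothing un-linted merges.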

kube-linter

The fastest and broadest linter. It checks for missing security contexts, absent resource limits, latest image tags, host namespace usage, missing network policies, and pods on the default service account. It runs in milliseconds against any directory of manifests or piped Kustomize output.
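
The optional config file tunes which checks run. A minimal sketch of a .kube-linter.yaml using kube-linter's `checks` schema (the excluded check is a real built-in; the selection itself is an illustrative assumption):

```yaml
# .kube-linter.yaml — start from all built-in checks, then carve out
# the ones that do not fit this cluster.
checks:
  addAllBuiltIn: true
  exclude:
    - "unset-cpu-requirements"   # example: CPU limits deliberately not enforced
```

Without a config file, kube-linter runs its default check set, which is a reasonable starting point.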

trivy config

Covers a different rule set. It catches embedded secrets, Dockerfile misconfigurations, and CIS benchmark violations in Kubernetes manifests, Terraform files, and Helm charts. Run it alongside kube-linter — the overlap is small enough that both add value.

kubescape

Maps manifests against published compliance frameworks: the NSA-CISA Kubernetes Hardening Guide, MITRE ATT&CK for Containers, and the CIS Kubernetes Benchmark. It produces a risk score from 0 to 100, giving you a single number to track improvement over time. More verbose than kube-linter, so it works better as a periodic audit than a CI gate.

Kyverno CLI

Kyverno is better known as an in-cluster admission controller, but its CLI works as a standalone linter. Write policies as declarative YAML — the same format used for in-cluster enforcement — and apply them to manifests locally.

This fills the gap the other linters leave: project-specific rules. kube-linter, trivy, and kubescape check against generic best practices. Kyverno CLI lets you enforce your own standards — naming conventions, required labels, allowed registries, or anything else expressible as a Kyverno policy.
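
As an example of a project-specific rule, here is a registry allow-list policy, sketched in standard Kyverno ClusterPolicy syntax; the policy name and the registry (ghcr.io/myorg) are invented for illustration:

```yaml
# policies/allowed-registries.yaml — reject Pods whose containers pull
# from anywhere other than the project's own registry.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: allowed-registries
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-project-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must come from ghcr.io/myorg."
        pattern:
          spec:
            containers:
              - image: "ghcr.io/myorg/*"
```

The same file works unchanged with kyverno apply locally and as a ClusterPolicy if Kyverno is ever deployed in-cluster.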

Two commands:

# Apply a single policy to a resource
kyverno apply policy.yaml --resource manifest.yaml

# Run a test suite with expected results (for CI)
kyverno test .

kyverno test reads a kyverno-test.yaml file that declares which policies to apply, which resources to test, and what the expected outcome is (pass or fail). This makes policy validation deterministic and CI-friendly.

kyverno-test.yaml
apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: security-checks
policies:
  - policies/require-non-root.yaml
  - policies/require-resource-limits.yaml
resources:
  - manifests/deployment.yaml
results:
  - policy: require-run-as-non-root
    rule: run-as-non-root
    resources:
      - myapp
    kind: Pod
    result: pass

Installation and daily commands

kube-linter is already available through mise. For the others:

mise use aqua:aquasecurity/trivy
mise use aqua:kubescape/kubescape
mise use aqua:kyverno/kyverno
mise use aqua:sigstore/cosign
# kube-linter — lint raw manifests or Kustomize output
kube-linter lint ./manifests/
kustomize build infrastructure/base | kube-linter lint -

# trivy — scan for misconfigs and embedded secrets
trivy config ./manifests/
trivy config --severity HIGH,CRITICAL ./manifests/

# kubescape — compliance scan against NSA-CISA framework
kubescape scan ./manifests/
kubescape scan framework nsa ./manifests/

# Kyverno CLI — apply custom policies
kyverno apply policies/ --resource manifests/
kyverno test .

# cosign — verify image signatures (keyless, GitHub Actions identity)
cosign verify ghcr.io/myorg/myapp:latest \
  --certificate-identity=https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com

Why not in-cluster admission control

Kyverno can run as a Kubernetes admission controller that blocks bad resources at the API server. We are not using it in-cluster for two reasons:

  1. Redundant with Flux + CI linting. Flux reconciles from git. If manifests pass linting in CI, they reach the cluster unchanged. An admission controller that re-checks the same rules adds a redundant gate.
  2. Risk of blocking deployments. An admission controller sits in the critical path. If Kyverno goes down, misconfigures, or a policy has an unintended match, it blocks all deployments — including the fix. In a small cluster without dedicated SRE, that risk outweighs the benefit.

Cosign image verification in CI closes the one gap that linters leave open, without putting anything in the deployment path that can break.

If you do decide to deploy Kyverno in-cluster later, the kyverno-policies Helm chart ships a curated set of policies implementing the Kubernetes Pod Security Standards — no hand-written ClusterPolicy manifests needed:

helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
helm install kyverno-policies kyverno/kyverno-policies \
  -n kyverno --set policyGroups=pod-security

The policyGroups=pod-security value scopes the install to Pod Security Standard policies. By default they run in Audit mode (log violations but do not block). To enforce:

helm install kyverno-policies kyverno/kyverno-policies -n kyverno \
  --set policyGroups=pod-security \
  --set validationFailureAction=Enforce

This gives you the essential policies (non-root, resource limits, read-only root filesystem, no privileged containers, required labels) and more, with minimal configuration. The same policy definitions can be used locally via kyverno apply with the CLI.

Falco uses eBPF to observe system calls inside containers. It detects shells spawned in containers, reads of /etc/shadow, unexpected outbound connections, privilege escalation attempts, and crypto mining indicators. It requires a Linux kernel ≥ 5.8 and ongoing rule tuning. The default ruleset generates false positives — expect to spend time silencing noise before it becomes useful. Add it when your threat model requires runtime detection.
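
If that point arrives, tuning mostly means narrowing noisy default rules in a local rules file rather than disabling them. A sketch, assuming Falco's rule-override syntax; the rule name is a real Falco default, but the exempted image is invented:

```yaml
# falco_rules.local.yaml — append an exception to a noisy default rule
# instead of turning it off entirely.
- rule: Terminal shell in container
  condition: and not container.image.repository = "ghcr.io/myorg/debug-toolbox"
  override:
    condition: append
```

Each exception should name the narrowest scope possible, so the rule keeps firing for everything else.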

See Security Linting and Policy Enforcement for full details on both, including Kyverno ClusterPolicy manifests and Falco installation.

Start here

Run kube-linter on your manifests. Already installed, catches the most common problems in under a second. Add to CI.

Add coverage

Install trivy and kubescape. Ten minutes of setup. Trivy adds secret detection and Dockerfile scanning. Kubescape adds compliance scoring.

Custom policies

Write Kyverno CLI policies for project-specific rules. Same YAML format used for in-cluster enforcement, so policies are reusable if you later deploy Kyverno to the cluster.

Close the real gap

Sign images with cosign in CI and verify before deploy. This is the one thing manifest linting cannot cover — and it stays in the CI pipeline, not in the cluster’s critical path.

When needed

Add Falco for runtime threat detection. Hardest to tune and maintain, but catches threats no static tool can see.