**Start here:** run kube-linter on your manifests. It is already installed, catches the most common problems in under a second, and belongs in CI.
Kubernetes accepts valid YAML without complaint. A deployment running as root with no resource limits, a mutable filesystem, and no network policy will schedule fine. Security problems surface later — in incidents, audits, or postmortems.
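For example, a deployment like the following is fully valid and schedules without a single warning, despite violating every one of those baselines (names and image are illustrative):

```yaml
# Valid, schedulable, and insecure: runs as root, no resource limits,
# writable root filesystem, mutable image tag. The API server accepts
# it silently.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: ghcr.io/myorg/myapp:latest   # mutable tag
          # no securityContext, no resources, no readOnlyRootFilesystem
```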
In a GitOps workflow, Flux reconciles the cluster against what is in git. Manifests that pass linting in CI reach the cluster unchanged — there is no manual kubectl apply step where someone could sneak in an unreviewed change. The cluster state matches the repository state.
This makes two categories of tooling less valuable: in-cluster admission controllers, because every manifest is already validated in CI before it can merge, and drift detection, because Flux guarantees the cluster matches the repository.
Static linting before merge is the highest-leverage investment. Everything else is defense in depth that we can add later if the threat model demands it.
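The pre-merge gate can be a single CI step. A sketch of a GitHub Actions job, assuming kube-linter is on the runner's PATH (e.g. via mise) and manifests live under `./manifests/` (workflow name and paths are assumptions):

```yaml
# .github/workflows/lint.yaml — hypothetical workflow
name: lint-manifests
on: [pull_request]
jobs:
  kube-linter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run kube-linter
        run: kube-linter lint ./manifests/   # non-zero exit fails the PR
```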
The linters above check manifest structure — whether your YAML follows best practices. What they cannot check is whether the container images referenced in those manifests are the images you actually built. A compromised registry, a typo in an image tag, or a supply chain attack that replaces an image after it was pushed — none of these show up in a YAML lint.
Cosign solves this. It signs container images at build time and verifies signatures before deploy. Cosign supports two signing modes: key-based, where you generate and manage a key pair yourself (cosign generate-key-pair), and keyless, where a short-lived certificate is issued for an OIDC identity (such as a GitHub Actions workflow) and the signature is recorded in a public transparency log. We use keyless, so there are no keys to store or rotate.
Sign images in CI after building them, then add a cosign verify step before Flux deploys:
```sh
# Sign after build (keyless, in GitHub Actions)
cosign sign --yes ghcr.io/myorg/myapp@$DIGEST
```
```sh
# Verify before deploy (in CI or locally)
cosign verify ghcr.io/myorg/myapp@$DIGEST \
  --certificate-identity=https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com
```

This keeps verification in the CI pipeline where the other linters run — no in-cluster admission controller needed. Adding cosign to CI closes the supply chain gap without introducing infrastructure that could block deployments if it breaks.
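Keyless signing in GitHub Actions requires the workflow to request an OIDC token. A sketch of the relevant job settings (image name and build steps are assumptions):

```yaml
# Keyless cosign signing needs an OIDC token from the Actions runtime
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for keyless certificate issuance
      packages: write   # push to ghcr.io
    steps:
      # ...build and push the image, capturing its digest in $DIGEST...
      - name: Sign image
        run: cosign sign --yes ghcr.io/myorg/myapp@$DIGEST
```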
| Tool | Setup | Security benefit | Daily usage |
|---|---|---|---|
| kube-linter | Trivial — already installed via mise. Optional .kube-linter.yaml config | Catches ~80% of common mistakes: missing resource limits, root containers, latest tags, absent network policies | One command, runs in milliseconds. Add to CI and forget |
| trivy config | Easy — single binary, no config needed | Different rule set — embedded secrets, Dockerfile issues, CIS misconfigs. Complements kube-linter | One command. Add to CI alongside kube-linter |
| kubescape | Easy — single binary, no config needed | Maps to compliance frameworks (NSA-CISA, CIS, MITRE ATT&CK). Produces a trackable risk score | Verbose output — run periodically, not on every save |
| Kyverno CLI | Easy — single binary + policy YAML files in the repo | Custom policies in declarative YAML. Catches anything the other linters miss with project-specific rules | kyverno apply per policy, or kyverno test in CI |
| cosign | Easy — single binary. Keyless mode needs no key management | Verifies image signatures — the one thing manifest linters cannot check. Closes the supply chain gap | Sign in CI after build, verify before deploy |
| Falco | Hard — HelmRelease + eBPF driver + rule tuning | Catches runtime threats no static tool can see: shells in containers, privilege escalation, sensitive file reads | Ongoing tuning. Defer until you need runtime detection |
```
CI / Pre-merge                     Deferred
──────────────                     ────────
kube-linter   (broad checks)       Falco (runtime — add when threat
trivy config  (secrets, CIS)              model requires it)
kubescape     (compliance score)
Kyverno CLI   (custom policies)
cosign verify (image signatures)
```

All five tools run in CI. The manifest linters analyze YAML on disk — no cluster required. Cosign verifies image signatures against the registry. Flux ensures that only merged, linted manifests reach the cluster.
**kube-linter** is the fastest and broadest linter. It checks for missing security contexts, absent resource limits, latest image tags, host namespace usage, missing network policies, and pods on the default service account. It runs in milliseconds against any directory of manifests or piped Kustomize output.
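The optional config mentioned in the table lets you enable extra checks or silence ones that do not apply. A sketch of a .kube-linter.yaml, assuming the excluded check name matches kube-linter's built-in check set:

```yaml
# .kube-linter.yaml — enable every built-in check, then exclude noisy ones
checks:
  addAllBuiltIn: true
  exclude:
    - "minimum-three-replicas"   # single-replica deployments are fine here
```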
**trivy config** covers a different rule set. It catches embedded secrets, Dockerfile misconfigurations, and CIS benchmark violations in Kubernetes manifests, Terraform files, and Helm charts. Run it alongside kube-linter — the overlap is small enough that both add value.
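To illustrate the kind of finding trivy adds: a value matching a known secret pattern (here AWS's documented example access key ID) embedded in a manifest is what its secret rules target. The Secret name below is a hypothetical example:

```yaml
# Embedded credential in a container spec — the kind of thing
# trivy's secret scanning is built to catch
env:
  - name: AWS_ACCESS_KEY_ID
    value: "AKIAIOSFODNN7EXAMPLE"   # flagged: credential committed to git
  # Prefer referencing a Kubernetes Secret instead:
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-credentials        # hypothetical Secret name
        key: secret-access-key
```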
**kubescape** maps manifests against published compliance frameworks: the NSA-CISA Kubernetes Hardening Guide, MITRE ATT&CK for Containers, and the CIS Kubernetes Benchmark. It produces a risk score from 0 to 100, giving you a single number to track improvement over time. It is more verbose than kube-linter, so it works better as a periodic audit than a CI gate.
**Kyverno** is better known as an in-cluster admission controller, but its CLI works as a standalone linter. Write policies as declarative YAML — the same format used for in-cluster enforcement — and apply them to manifests locally.
This fills the gap the other linters leave: project-specific rules. kube-linter, trivy, and kubescape check against generic best practices. Kyverno CLI lets you enforce your own standards — naming conventions, required labels, allowed registries, or anything else expressible as a Kyverno policy.
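As a sketch of such a project-specific rule, here is a policy restricting Pods to images from a single registry. The policy name, file path, and registry are assumptions for illustration:

```yaml
# policies/allowed-registries.yaml — hypothetical example policy
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: allowed-registries
spec:
  validationFailureAction: Enforce
  rules:
    - name: only-ghcr-myorg
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must come from ghcr.io/myorg."
        pattern:
          spec:
            containers:
              - image: "ghcr.io/myorg/*"
```

Applied locally with `kyverno apply policies/allowed-registries.yaml --resource manifests/deployment.yaml`, the same file works unchanged if Kyverno is later deployed in-cluster.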
Two commands:
```sh
# Apply a single policy to a resource
kyverno apply policy.yaml --resource manifest.yaml
```
```sh
# Run a test suite with expected results (for CI)
kyverno test .
```

kyverno test reads a kyverno-test.yaml file that declares which policies to apply, which resources to test, and what the expected outcome is (pass or fail). This makes policy validation deterministic and CI-friendly.
```yaml
apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: security-checks
policies:
  - policies/require-non-root.yaml
  - policies/require-resource-limits.yaml
resources:
  - manifests/deployment.yaml
results:
  - policy: require-run-as-non-root
    rule: run-as-non-root
    resources:
      - myapp
    kind: Pod
    result: pass
```

kube-linter is already available through mise. For the others:
```sh
mise use aqua:aquasecurity/trivy
mise use aqua:kubescape/kubescape
mise use aqua:kyverno/kyverno
mise use aqua:sigstore/cosign
```

```sh
# kube-linter — lint raw manifests or Kustomize output
kube-linter lint ./manifests/
kustomize build infrastructure/base | kube-linter lint -
```
```sh
# trivy — scan for misconfigs and embedded secrets
trivy config ./manifests/
trivy config --severity HIGH,CRITICAL ./manifests/
```
```sh
# kubescape — compliance scan against NSA-CISA framework
kubescape scan ./manifests/
kubescape scan framework nsa ./manifests/
```
```sh
# Kyverno CLI — apply custom policies
kyverno apply policies/ --resource manifests/
kyverno test .
```
```sh
# cosign — verify image signatures (keyless, GitHub Actions identity)
cosign verify ghcr.io/myorg/myapp:latest \
  --certificate-identity=https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com
```

Kyverno can run as a Kubernetes admission controller that blocks bad resources at the API server. We are not using it in-cluster for two reasons: the same policies already run in CI before merge, so in-cluster enforcement would be redundant, and an admission webhook sits in the deployment path, where an outage can block every deploy.
Cosign image verification in CI closes the one gap that linters leave open, without putting anything in the deployment path that can break.
If you do decide to deploy Kyverno in-cluster later, the kyverno-policies Helm chart ships a curated set of policies implementing the Kubernetes Pod Security Standards — no hand-written ClusterPolicy manifests needed:
```sh
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
helm install kyverno-policies kyverno/kyverno-policies \
  -n kyverno --set policyGroups=pod-security
```

The policyGroups=pod-security value scopes the install to Pod Security Standard policies. By default they run in Audit mode (log violations but do not block). To enforce:
```sh
helm install kyverno-policies kyverno/kyverno-policies -n kyverno \
  --set policyGroups=pod-security \
  --set validationFailureAction=Enforce
```

This gives you the essential policies (non-root, resource limits, read-only root filesystem, no privileged containers, required labels) and more, with minimal configuration. The same policy definitions can be used locally via kyverno apply with the CLI.
Falco uses eBPF to observe system calls inside containers. It detects shells spawned in containers, reads of /etc/shadow, unexpected outbound connections, privilege escalation attempts, and crypto mining indicators. It requires a Linux kernel ≥ 5.8 and ongoing rule tuning. The default ruleset generates false positives — expect to spend time silencing noise before it becomes useful. Add it when your threat model requires runtime detection.
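For a sense of what the rule tuning involves, here is a sketch of a minimal custom rule in a local rules file. The file name, image, and rule name are assumptions; spawned_process and container are macros from Falco's default ruleset:

```yaml
# falco_rules.local.yaml — a minimal custom rule (illustrative)
- rule: Shell spawned in myapp container
  desc: Detect interactive shells inside the myapp image
  condition: >
    spawned_process and container
    and container.image.repository = "ghcr.io/myorg/myapp"
    and proc.name in (bash, sh)
  output: "Shell in myapp container (user=%user.name cmd=%proc.cmdline)"
  priority: WARNING
```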
See Security Linting and Policy Enforcement for full details on both, including Kyverno ClusterPolicy manifests and Falco installation.
1. **Start here.** Run kube-linter on your manifests. Already installed, catches the most common problems in under a second. Add to CI.
2. **Add coverage.** Install trivy and kubescape. Ten minutes of setup. Trivy adds secret detection and Dockerfile scanning. Kubescape adds compliance scoring.
3. **Custom policies.** Write Kyverno CLI policies for project-specific rules. Same YAML format used for in-cluster enforcement, so policies are reusable if you later deploy Kyverno to the cluster.
4. **Close the real gap.** Sign images with cosign in CI and verify before deploy. This is the one thing manifest linting cannot cover, and it stays in the CI pipeline, not in the cluster's critical path.
5. **When needed.** Add Falco for runtime threat detection. Hardest to tune and maintain, but catches threats no static tool can see.