Security Linting and Policy Enforcement

Kubernetes accepts almost any valid YAML. A deployment with no resource limits, running as root, with a mutable filesystem will schedule and run without complaint. Security problems surface later — in production incidents, compliance audits, or breach postmortems.

This tutorial walks through four layers of defense, from static analysis on your laptop to runtime anomaly detection in the cluster. Each layer catches problems the previous one cannot.

Development        Deployment           Runtime
───────────        ──────────           ───────
kube-linter        Kyverno (Audit)      Falco
trivy config       Kyverno (Enforce)
kubescape

Start with the linters. They need no cluster and run in seconds. Add admission control once you have manifests passing lint. Add runtime monitoring last.

The first layer is static analysis. These tools read YAML files on disk, so no cluster is required; run them locally during development and in CI before merge.

kube-linter checks manifests against common security and configuration mistakes. It runs in milliseconds and catches the most frequent problems: missing security contexts, absent resource limits, latest image tags, use of host namespaces, missing network policies, and pods running on the default service account.

```shell
kube-linter lint ./manifests/
```
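For instance, a deployment like the following (a hypothetical `web` app, illustrative only) trips several of those checks at once: a latest tag, no resource limits, no security context, and an implicit default service account.

```yaml
# Illustrative manifest (hypothetical "web" app) that kube-linter flags:
# "latest" image tag, no resource limits, no securityContext, and it
# implicitly runs on the default service account.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:latest
```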

To lint Kustomize output piped from stdin:

```shell
kustomize build infrastructure/base | kube-linter lint -
```

Configure checks in .kube-linter.yaml at the repository root:

```yaml
checks:
  addAllBuiltIn: true
  exclude:
    - "dangling-service"
```

addAllBuiltIn: true enables every built-in check. Exclude individual checks by name when you have a documented reason to skip them.

Trivy scans for misconfigurations and embedded secrets in Kubernetes manifests, Dockerfiles, Terraform files, and Helm charts.

```shell
trivy config ./manifests/
```

Filter to high and critical severity findings:

```shell
trivy config --severity HIGH,CRITICAL ./manifests/
```

Trivy also scans container images for CVEs, but the config subcommand focuses on manifest structure. Use it alongside kube-linter — they check different rule sets.

Kubescape maps your manifests against published compliance frameworks: NSA-CISA Kubernetes Hardening Guide, MITRE ATT&CK for Containers, and the CIS Kubernetes Benchmark. It produces a risk score from 0 to 100.

Scan with the default framework:

```shell
kubescape scan ./manifests/
```

Scan against the NSA-CISA framework:

```shell
kubescape scan framework nsa ./manifests/
```

The output groups findings by control, shows which resources failed, and explains what to fix. The risk score gives you a single number to track improvement over time.

Polaris overlaps with kube-linter but adds a web dashboard for browsing results interactively.

Audit from the command line:

```shell
polaris audit --audit-path ./manifests/
```

Run the dashboard locally:

```shell
polaris dashboard --port 8080
```

Open http://localhost:8080 to browse findings by category and severity. Polaris is optional if you already run kube-linter, but the dashboard is useful for teams reviewing results together.

Static analysis catches structural problems. Auditing a running cluster catches drift — configurations that passed lint but were modified after deployment, or cluster-level settings the manifest tools cannot see.

Point kubescape at your cluster instead of a file path:

```shell
kubescape scan
kubescape scan framework nsa
kubescape scan framework cis-v1.23-t1.0.1
```

Without a file path argument, kubescape connects to the current kubectl context and scans every resource in the cluster.

Trivy’s kubernetes subcommand scans the running cluster for misconfigurations, vulnerabilities, and compliance violations:

```shell
trivy kubernetes --report summary
```

Check CIS benchmark compliance:

```shell
trivy kubernetes --compliance k8s-cis-1.23 --report summary
```

Scan only for misconfiguration (skip image vulnerabilities for speed):

```shell
trivy kubernetes --scanners misconfig --report summary
```

Linters warn. Kyverno blocks. It runs as a Kubernetes admission controller — every resource creation or update passes through Kyverno before reaching etcd. Policies that fail validation reject the API request.

```shell
helm repo add kyverno https://kyverno.github.io/kyverno/
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
```

See the Kyverno documentation for configuration options and upgrades.

Every Kyverno policy sets a validationFailureAction:

  • Audit — violations are logged in policy reports but not blocked. Resources still create and update normally. Start here.
  • Enforce — violations reject the API request. The resource does not get created or updated.

Begin with Audit on every policy. Review the policy reports. Once you confirm the policies match your expectations and existing workloads comply, switch to Enforce.
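Flipping a policy from Audit to Enforce is a one-field change. A sketch using kubectl patch, assuming the require-run-as-non-root policy shown below is installed:

```shell
# Switch an installed ClusterPolicy from Audit to Enforce in place.
# The policy name here matches the first example policy below.
kubectl patch clusterpolicy require-run-as-non-root \
  --type merge \
  -p '{"spec":{"validationFailureAction":"Enforce"}}'
```

Editing the manifest in Git and re-applying it achieves the same result and keeps the change tracked.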

These five policies cover the most common security gaps. Each is a ClusterPolicy that applies cluster-wide.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Audit
  rules:
    - name: run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Running as root is not allowed."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
            containers:
              - securityContext:
                  runAsNonRoot: true
```

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Audit
  rules:
    - name: check-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
```

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-read-only-root
spec:
  validationFailureAction: Audit
  rules:
    - name: read-only-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Root filesystem must be read-only."
        pattern:
          spec:
            containers:
              - securityContext:
                  readOnlyRootFilesystem: true
```

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Audit
  rules:
    - name: no-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              # Conditional anchors: only if securityContext and privileged
              # are present must privileged be "false". Containers that omit
              # securityContext entirely still pass.
              - =(securityContext):
                  =(privileged): "false"
```

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: Audit
  rules:
    - name: check-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'app.kubernetes.io/name' is required."
        pattern:
          metadata:
            labels:
              app.kubernetes.io/name: "?*"
```

Policy reports accumulate in each namespace. Cluster-scoped resources get a separate cluster report.

```shell
kubectl get policyreport -A
kubectl get clusterpolicyreport
```

Describe a report to see individual violations with messages:

```shell
kubectl describe policyreport -n default
```

Use a server-side dry run to check whether Kyverno would reject a resource without actually creating it:

```shell
kubectl apply --dry-run=server -f pod.yaml
```

If a policy in Enforce mode would block the resource, the dry run returns the rejection message. This gives you fast feedback during development.
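To see this in action, a deliberately non-compliant pod (name and image are illustrative) makes a good dry-run test case. It violates the root, limits, read-only-filesystem, and app-label policies above:

```yaml
# pod.yaml: a deliberately bad pod for testing admission policies.
# Violates require-run-as-non-root, require-resource-limits,
# require-read-only-root, and require-app-label.
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
spec:
  containers:
    - name: app
      image: nginx:latest
```

With the policies in Audit mode the dry run succeeds and violations land in the policy reports; in Enforce mode the same dry run returns the rejection messages.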

Linters check files. Kyverno checks API requests. Falco watches what containers actually do at runtime. It uses eBPF to observe system calls and detects anomalous behavior: shells spawned inside containers, reads of sensitive files like /etc/shadow, unexpected outbound network connections, privilege escalation attempts, and crypto mining indicators.

```shell
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
  -n falco --create-namespace \
  --set driver.kind=modern_ebpf
```

The modern_ebpf driver requires Linux kernel 5.8 or later. See the Falco documentation for alternative driver options.

Falco writes alerts to stdout in the pod logs:

```shell
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=100
```

Each alert includes a timestamp, severity, rule name, and details about the syscall that triggered it. In production, forward these logs to your monitoring stack for alerting and dashboards.
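A quick way to confirm Falco is watching: exec an interactive shell into any running container, which the default ruleset flags (the exact rule name varies by ruleset version).

```shell
# Spawning an interactive shell inside a container trips Falco's
# default shell-in-container rule. The pod name is a placeholder;
# use any pod in your cluster.
kubectl exec -it some-pod -- /bin/sh
```

The alert should appear in the Falco pod logs within a few seconds.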

For Kustomize repositories, lock down the base layer and let overlays loosen restrictions where needed. This way, forgetting to add a policy means you get the more secure default, not the less secure one.

Every base NetworkPolicy must include policyTypes: [Ingress, Egress] with explicit egress rules. Without an egress policy, a compromised pod can make arbitrary outbound connections — data exfiltration, reverse shells, crypto mining callbacks.

The minimum egress rule allows DNS resolution only:

```yaml
# DEFAULT-DENY EGRESS: Only DNS to kube-system is allowed.
# A compromised pod cannot make arbitrary outbound connections.
# If your app needs additional egress (e.g. to an API), add rules
# in the overlay — do NOT remove the default-deny here.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - port: 8080
          protocol: TCP
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

If your app needs to call an external API, add a second egress rule in the overlay — do not widen the base policy.
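As a sketch, an overlay-only policy granting HTTPS egress might look like the following. NetworkPolicies selecting the same pods are additive, so this supplements the base default-deny rather than modifying it; the name is illustrative.

```yaml
# Overlay-only policy: adds HTTPS egress for myapp on top of the
# base default-deny. Note this allows port 443 to any destination;
# a tighter rule would scope it with a `to:` ipBlock.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-external-api
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Egress
  egress:
    - ports:
        - port: 443
          protocol: TCP
```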

Floating tags like caddy:alpine can be repointed to a different image by a supply chain attack on the registry. Pin production images to a digest:

```yaml
image: docker.io/library/caddy:alpine@sha256:a1b7e624f...
```

Dev overlays can override to a floating tag for quick iteration, but the base should always be pinned.
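To find the digest behind a tag, one option is the crane CLI from the go-containerregistry project (assuming it is installed); docker buildx imagetools inspect works as well.

```shell
# Resolve a floating tag to the digest it currently points at,
# so the digest can be pinned in the base manifest.
crane digest docker.io/library/caddy:alpine
```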

Set seccompProfile.type: RuntimeDefault in the pod-level securityContext. This applies the container runtime’s default syscall filter, blocking dangerous syscalls like ptrace and mount. Without it, containers run with the Unconfined profile.

```yaml
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
```

Combined with the rest of the checklist, a hardened base sets:
  • runAsNonRoot: true (pod and container level)
  • readOnlyRootFilesystem: true
  • allowPrivilegeEscalation: false
  • capabilities.drop: [ALL]
  • seccompProfile.type: RuntimeDefault
  • Dedicated ServiceAccount with automountServiceAccountToken: false
  • NetworkPolicy with default-deny egress (DNS only)
  • Image pinned by digest with full registry path
  • Resource limits set

Overlays can relax any of these — but they must do so explicitly.
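Putting the container-level items of that checklist together, a base container spec might look like this sketch (the name, image, digest, and limits are placeholders; the pod-level securityContext was shown above):

```yaml
# Container-level hardening from the checklist. Pod-level settings
# (runAsNonRoot, seccompProfile) are set in the pod securityContext.
containers:
  - name: myapp
    image: docker.io/library/myapp:1.2.3@sha256:...  # placeholder digest
    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
    resources:
      limits:
        memory: 256Mi
        cpu: 500m
```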

  1. Add kube-linter to CI. Run it on every pull request. It catches 80% of common security mistakes in under a second.
  2. Run kubescape periodically for compliance scoring. Track the risk score over time. Use the NSA-CISA framework as a starting baseline.
  3. Install Kyverno in Audit mode. Apply the five policies above. Review policy reports weekly. Fix violations in your manifests.
  4. Switch Kyverno to Enforce once existing workloads pass all policies. New deployments that violate policy will be rejected at the API server.
  5. Add Falco for runtime anomaly detection. Start with the default ruleset. Tune out false positives for your specific workloads.
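Step 1 can be wired up with a minimal CI job. This GitHub Actions sketch assumes manifests live under ./manifests/ and installs kube-linter from its Go module path:

```yaml
# Hypothetical workflow for step 1: lint manifests on every PR.
name: lint-manifests
on: pull_request
jobs:
  kube-linter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
      - run: go install golang.stackrox.io/kube-linter/cmd/kube-linter@latest
      - run: kube-linter lint ./manifests/
```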

Each layer reinforces the others. Linters prevent known-bad patterns from entering the cluster. Kyverno enforces organizational policy at the gate. Falco catches runtime behavior that no static analysis can predict.