Security Linting and Policy Enforcement
Kubernetes accepts almost any valid YAML. A deployment with no resource limits, running as root, with a mutable filesystem will schedule and run without complaint. Security problems surface later — in production incidents, compliance audits, or breach postmortems.
This tutorial walks through four layers of defense, from static analysis on your laptop to runtime anomaly detection in the cluster. Each layer catches problems the previous one cannot.
```
Development        Deployment          Runtime
───────────        ──────────          ───────
kube-linter        Kyverno (Audit)     Falco
trivy config       Kyverno (Enforce)
kubescape
```

Start with the linters. They need no cluster and run in seconds. Add admission control once you have manifests passing lint. Add runtime monitoring last.
Layer 1: Lint Before Deploy
These tools analyze YAML files on disk. No cluster required. Run them locally during development and in CI before merge.
kube-linter
kube-linter checks manifests against common security and configuration mistakes. It runs in milliseconds and catches the most frequent problems: missing security contexts, absent resource limits, latest image tags, use of host namespaces, missing network policies, and pods running on the default service account.
```sh
kube-linter lint ./manifests/
```

To lint Kustomize output piped from stdin:
```sh
kustomize build infrastructure/base | kube-linter lint -
```

Configure checks in .kube-linter.yaml at the repository root:
```yaml
checks:
  addAllBuiltIn: true
  exclude:
    - "dangling-service"
```

addAllBuiltIn: true enables every built-in check. Exclude individual checks by name when you have a documented reason to skip them.
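Globally excluding a check is a blunt instrument. kube-linter also supports suppressing a check on a single object with an ignore annotation whose value documents the reason; a sketch (the Deployment name and reason string are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app   # placeholder workload name
  annotations:
    # Suppress the read-only-root-filesystem check for this object only;
    # the annotation value records why the exception exists.
    ignore-check.kube-linter.io/no-read-only-root-fs: "legacy app writes to /tmp"
# ...rest of the Deployment spec unchanged
```

This keeps the check active everywhere else while documenting the exception next to the workload it applies to.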
trivy

Trivy scans for misconfigurations and embedded secrets in Kubernetes manifests, Dockerfiles, Terraform files, and Helm charts.
```sh
trivy config ./manifests/
```

Filter to high and critical severity findings:
```sh
trivy config --severity HIGH,CRITICAL ./manifests/
```

Trivy also scans container images for CVEs, but the config subcommand focuses on manifest structure. Use it alongside kube-linter — they check different rule sets.
kubescape
Kubescape maps your manifests against published compliance frameworks: NSA-CISA Kubernetes Hardening Guide, MITRE ATT&CK for Containers, and the CIS Kubernetes Benchmark. It produces a risk score from 0 to 100.
Scan with the default framework:
```sh
kubescape scan ./manifests/
```

Scan against the NSA-CISA framework:
```sh
kubescape scan framework nsa ./manifests/
```

The output groups findings by control, shows which resources failed, and explains what to fix. The risk score gives you a single number to track improvement over time.
Polaris
Polaris overlaps with kube-linter but adds a web dashboard for browsing results interactively.
Audit from the command line:
```sh
polaris audit --audit-path ./manifests/
```

Run the dashboard locally:
```sh
polaris dashboard --port 8080
```

Open http://localhost:8080 to browse findings by category and severity. Polaris is optional if you already run kube-linter, but the dashboard is useful for teams reviewing results together.
Layer 2: Audit a Live Cluster
Static analysis catches structural problems. Auditing a running cluster catches drift — configurations that passed lint but were modified after deployment, or cluster-level settings the manifest tools cannot see.
kubescape on a live cluster
Point kubescape at your cluster instead of a file path:
```sh
kubescape scan
kubescape scan framework nsa
kubescape scan framework cis-v1.23-t1.0.1
```

Without a file path argument, kubescape connects to the current kubectl context and scans every resource in the cluster.
trivy on a live cluster
Trivy's kubernetes subcommand scans the running cluster for misconfigurations, vulnerabilities, and compliance violations:
```sh
trivy kubernetes --report summary
```

Check CIS benchmark compliance:
```sh
trivy kubernetes --compliance k8s-cis-1.23 --report summary
```

Scan only for misconfigurations (skip image vulnerabilities for speed):
```sh
trivy kubernetes --scanners misconfig --report summary
```

Layer 3: Kyverno Policy Enforcement
Linters warn. Kyverno blocks. It runs as a Kubernetes admission controller — every resource creation or update passes through Kyverno before reaching etcd. Policies that fail validation reject the API request.
Install Kyverno
```sh
helm repo add kyverno https://kyverno.github.io/kyverno/
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
```

See the Kyverno documentation for configuration options and upgrades.
Audit vs Enforce
Every Kyverno policy sets a validationFailureAction:
- Audit — violations are logged in policy reports but not blocked. Resources still create and update normally. Start here.
- Enforce — violations reject the API request. The resource does not get created or updated.
Begin with Audit on every policy. Review the policy reports. Once you confirm the policies match your expectations and existing workloads comply, switch to Enforce.
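The switch itself is a one-line change per policy:

```yaml
spec:
  # Audit: violations are only recorded in policy reports.
  # Enforce: violating API requests are rejected outright.
  validationFailureAction: Enforce   # was: Audit
```

Because the field lives on each policy, you can move policies to Enforce one at a time as their reports come back clean.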
Essential policies
These five policies cover the most common security gaps. Each is a ClusterPolicy that applies cluster-wide.
Require non-root containers
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Audit
  rules:
    - name: run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Running as root is not allowed."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
            containers:
              - securityContext:
                  runAsNonRoot: true
```

Require resource limits
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Audit
  rules:
    - name: check-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
```

Require read-only root filesystem
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-read-only-root
spec:
  validationFailureAction: Audit
  rules:
    - name: read-only-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Root filesystem must be read-only."
        pattern:
          spec:
            containers:
              - securityContext:
                  readOnlyRootFilesystem: true
```

Disallow privileged containers
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Audit
  rules:
    - name: no-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - securityContext:
                  privileged: "!true"
```

Require labels
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: Audit
  rules:
    - name: check-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'app.kubernetes.io/name' is required."
        pattern:
          metadata:
            labels:
              app.kubernetes.io/name: "?*"
```

Viewing violations
Policy reports accumulate in each namespace. Cluster-scoped resources get a separate cluster report.
```sh
kubectl get policyreport -A
kubectl get clusterpolicyreport
```

Describe a report to see individual violations with messages:
```sh
kubectl describe policyreport -n default
```

Testing policies before deploy
Use a server-side dry run to check whether Kyverno would reject a resource without actually creating it:
```sh
kubectl apply --dry-run=server -f pod.yaml
```

If a policy in Enforce mode would block the resource, the dry run returns the rejection message. This gives you fast feedback during development.
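To see the feedback loop in action, dry-run a pod that deliberately violates the non-root policy above; a minimal example (the pod name and image are arbitrary):

```yaml
# pod.yaml — violates require-run-as-non-root: no securityContext is
# set, so runAsNonRoot is unset and the policy pattern fails to match.
apiVersion: v1
kind: Pod
metadata:
  name: policy-test
  labels:
    app.kubernetes.io/name: policy-test
spec:
  containers:
    - name: app
      image: nginx:1.27
      resources:
        limits:
          memory: "128Mi"
          cpu: "100m"
```

With the policy in Enforce mode, the dry run should come back with the policy's "Running as root is not allowed." message; in Audit mode the pod passes and a report entry is recorded instead.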
Layer 4: Falco Runtime Monitoring
Linters check files. Kyverno checks API requests. Falco watches what containers actually do at runtime. It uses eBPF to observe system calls and detects anomalous behavior: shells spawned inside containers, reads of sensitive files like /etc/shadow, unexpected outbound network connections, privilege escalation attempts, and crypto mining indicators.
Install Falco
```sh
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
  -n falco --create-namespace \
  --set driver.kind=modern_ebpf
```

The modern_ebpf driver requires Linux kernel 5.8 or later. See the Falco documentation for alternative driver options.
Checking alerts
Falco writes alerts to stdout in the pod logs:
```sh
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=100
```

Each alert includes a timestamp, severity, rule name, and details about the syscall that triggered it. In production, forward these logs to your monitoring stack for alerting and dashboards.
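Falco's behavior is driven by YAML rules, and you can add your own. A sketch of a custom rule (the rule name and output string are made up; the condition builds on Falco's bundled spawned_process and container macros):

```yaml
# custom-rules.yaml — hypothetical rule: flag package managers
# executed inside containers, a common post-exploitation step.
- rule: Package Manager in Container
  desc: Detect apt/apk/yum/dnf executed inside a running container
  condition: >
    spawned_process and container and
    proc.name in (apt, apt-get, apk, yum, dnf)
  output: >
    Package manager run in container
    (command=%proc.cmdline container=%container.name
    image=%container.image.repository)
  priority: WARNING
```

Check the Falco documentation for how to load custom rule files; the Helm chart documents a customRules value for this purpose.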
Secure-by-default base manifests
The base layer should be locked down. Overlays loosen restrictions where needed. This way, forgetting to add a policy means you get the more secure default, not the less secure one.
Default-deny egress NetworkPolicy
Every base NetworkPolicy must include policyTypes: [Ingress, Egress] with explicit egress rules. Without an egress policy, a compromised pod can make arbitrary outbound connections — data exfiltration, reverse shells, crypto mining callbacks.
The minimum egress rule allows DNS resolution only:
```yaml
# DEFAULT-DENY EGRESS: Only DNS to kube-system is allowed.
# A compromised pod cannot make arbitrary outbound connections.
# If your app needs additional egress (e.g. to an API), add rules
# in the overlay — do NOT remove the default-deny here.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - port: 8080
          protocol: TCP
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

If your app needs to call an external API, add a second egress rule in the overlay — do not widen the base policy.
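For example, the overlay for a workload that calls an external HTTPS API would carry an egress list like this (the CIDR is a placeholder; note that strategic-merge patches replace whole lists, so the DNS rule from the base is restated here):

```yaml
# Overlay egress list: DNS (restated from the base) plus one external API.
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
    ports:
      - port: 53
        protocol: UDP
      - port: 53
        protocol: TCP
  - to:
      - ipBlock:
          cidr: 203.0.113.10/32   # placeholder: your API's address
    ports:
      - port: 443
        protocol: TCP
```

The narrow /32 ipBlock keeps the allowance as tight as the default-deny posture intends.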
Pin container images by digest
Floating tags like caddy:alpine can be swapped by a supply chain attack on the registry. Pin production images to a digest:
```yaml
image: docker.io/library/caddy:alpine@sha256:a1b7e624f...
```

Dev overlays can override to a floating tag for quick iteration, but the base should always be pinned.
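With Kustomize, a dev overlay can swap the pinned image for a floating tag through the images transformer; a sketch (the paths are assumptions about your repository layout):

```yaml
# overlays/dev/kustomization.yaml — hypothetical dev overlay.
# Overrides the digest-pinned base image with a floating tag.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: docker.io/library/caddy
    newTag: alpine   # dev convenience only; the base stays digest-pinned
```

Because the override lives only in the dev overlay, production builds from the base keep the digest pin.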
seccompProfile: RuntimeDefault
Set seccompProfile.type: RuntimeDefault in the pod-level securityContext. This applies the container runtime's default syscall filter, blocking dangerous syscalls like ptrace and mount. Without it, containers run with the Unconfined profile.
```yaml
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
```

Checklist for every base deployment
- runAsNonRoot: true (pod and container level)
- readOnlyRootFilesystem: true
- allowPrivilegeEscalation: false
- capabilities.drop: [ALL]
- seccompProfile.type: RuntimeDefault
- Dedicated ServiceAccount with automountServiceAccountToken: false
- NetworkPolicy with default-deny egress (DNS only)
- Image pinned by digest with full registry path
- Resource limits set
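The container-level items on this checklist combine into a fragment like the following (the container name, image, and limit values are placeholders):

```yaml
# Container-level settings covering the checklist items above.
containers:
  - name: app                                     # placeholder name
    image: registry.example.com/app@sha256:...    # placeholder digest pin
    securityContext:
      runAsNonRoot: true
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
    resources:
      limits:
        memory: "256Mi"
        cpu: "250m"
```

The pod-level securityContext from the previous section supplies runAsNonRoot and the seccomp profile; the rest must be set per container.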
Overlays can relax any of these — but they must do so explicitly.
Recommended workflow
- Add kube-linter to CI. Run it on every pull request. It catches 80% of common security mistakes in under a second.
- Run kubescape periodically for compliance scoring. Track the risk score over time. Use the NSA-CISA framework as a starting baseline.
- Install Kyverno in Audit mode. Apply the five policies above. Review policy reports weekly. Fix violations in your manifests.
- Switch Kyverno to Enforce once existing workloads pass all policies. New deployments that violate policy will be rejected at the API server.
- Add Falco for runtime anomaly detection. Start with the default ruleset. Tune out false positives for your specific workloads.
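Step 1 could look like this for GitHub Actions; a sketch, with the download URL and release asset name as assumptions to verify against the kube-linter releases page:

```yaml
# .github/workflows/lint.yaml — hypothetical CI job running kube-linter
name: manifest-lint
on: [pull_request]
jobs:
  kube-linter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install kube-linter
        run: |
          curl -sSL -o kube-linter.tar.gz \
            https://github.com/stackrox/kube-linter/releases/latest/download/kube-linter-linux.tar.gz
          tar -xzf kube-linter.tar.gz
      - name: Lint manifests
        run: ./kube-linter lint ./manifests/
```

kube-linter exits non-zero on findings, so the job fails the pull request until the manifests pass.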
Each layer reinforces the others. Linters prevent known-bad patterns from entering the cluster. Kyverno enforces organizational policy at the gate. Falco catches runtime behavior that no static analysis can predict.
Next steps
- Production Hardening covers RBAC and network policies that complement these tools
- Full Stack HA has the manifests you can practice linting
- Flux CD explains where to integrate linting in your GitOps pipeline
- Development Workflow shows where linting fits in the development cycle