Configuring RBAC

Kubernetes RBAC has three moving parts: a ClusterRole that lists allowed verbs and resources, a subject (User, Group, or ServiceAccount) that holds an identity, and a RoleBinding that connects the two within a namespace. This project defines ClusterRoles once at the cluster level and reuses them per namespace through overlays.
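
For orientation, this is what one complete binding looks like once the three parts come together. The manifest below is a sketch assembled from pieces shown later on this page:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-members-edit       # no namespace here; the Kustomize overlay supplies it
subjects:
- kind: Group                   # the identity granted access
  name: team-members
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole             # the rules being granted
  name: edit
  apiGroup: rbac.authorization.k8s.io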

RBAC resources belong in infrastructure/, not in Helm templates/. The rule of thumb: if deleting a resource during helm uninstall would break something outside the app, that resource is not part of the app lifecycle.

ClusterRoles and group RoleBindings are organizational policy. They survive chart uninstalls, are shared across namespaces, and have no dependency on a specific deployment. ServiceAccounts that give pods an identity do belong in Helm templates — they are created and deleted with the chart.
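
On disk, that split might look like this. The base/1-rbac and overlays/production paths match the Kustomize setup shown below; the chart layout is illustrative:

infrastructure/
  base/
    1-rbac/                  # ClusterRoles and group RoleBindings (organizational policy)
  overlays/
    production/
      kustomization.yaml
chart/                       # the application's Helm chart (name illustrative)
  templates/
    serviceaccount.yaml      # pod identity, created and deleted with the chart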

Four roles cover the access needs in this project:

Role                 Based on                 Who
namespace-admin      Aggregated ClusterRole   Co-founders, lead engineers
edit                 Built-in                 Team members
view                 Built-in                 Contractors
frontend-developer   Custom ClusterRole       Frontend developers

namespace-admin is the highest namespace-scoped tier. It uses an aggregation rule that selects any ClusterRole labeled rbac.authorization.k8s.io/aggregate-to-admin: "true", so CRD controllers automatically extend it when they install new resource types. It grants full management of namespace-scoped resources, including Roles and RoleBindings. Kubernetes prevents privilege escalation — a namespace-admin cannot create bindings for permissions they do not already hold.
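
A sketch of the aggregation pattern described above (the project's actual manifest may differ in detail):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-admin
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules: []   # populated automatically by the controller from matching ClusterRoles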

edit (built-in) grants create, update, and delete access to most workload resources. It does not include Roles or RoleBindings, so team members can manage deployments without touching namespace RBAC.

view (built-in) is read-only: get, list, and watch on most namespace-scoped resources. Note that the built-in view role includes Secrets. If contractors should not read Secret values, replace this binding with a custom ClusterRole that omits Secrets.
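
If that matters, a replacement could look roughly like this. The name and the exact resource list are illustrative; the point is that secrets is simply left out:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-no-secrets   # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services", "configmaps", "endpoints", "events"]   # note: no secrets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
  verbs: ["get", "list", "watch"]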

frontend-developer is a custom role for developers who should not touch backend infrastructure. It grants full CRUD on Deployments, ReplicaSets, and ConfigMaps, plus update/patch on Services. Pods and pod logs are read-only. It excludes Secrets, PersistentVolumeClaims, ServiceAccounts, and network resources that the built-in edit role would otherwise permit.
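
Expressed as rules, that description translates to roughly the following sketch. The project's manifest is the source of truth; the read verbs on Services are an assumption here:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: frontend-developer
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update", "patch"]   # no create or delete; read verbs assumed
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]                      # read-only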

Bind to groups, not individual users. One RoleBinding per group covers everyone in it. Promoting or demoting someone means moving them between groups in your identity provider, not editing YAML.

Kubernetes groups are implicit: you do not create a group resource. A group exists the moment a RoleBinding references it. Group membership is asserted by the authentication layer (certificate O field or OIDC claim) at request time.
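
To check what a binding actually grants, kubectl can impersonate a group (impersonation itself requires permission; the user and group names here are illustrative):

kubectl auth can-i create deployments \
  --as=jane \
  --as-group=team-members \
  --namespace=production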

The four bindings in this project:

team-admins → namespace-admin

subjects:
- kind: Group
  name: team-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-admin
  apiGroup: rbac.authorization.k8s.io

team-members → edit

subjects:
- kind: Group
  name: team-members
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io

contractors → view

subjects:
- kind: Group
  name: contractors
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
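
The fourth binding follows the same shape. The group name frontend-developers is an assumption here; use whatever group your identity provider actually asserts:

frontend-developers → frontend-developer

subjects:
- kind: Group
  name: frontend-developers   # assumed group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: frontend-developer
  apiGroup: rbac.authorization.k8s.io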

The binding names follow the pattern <group>-<role> — for example, team-admins-namespace-admin reads as “team-admins get namespace-admin.”

The base RoleBinding manifests omit the namespace field. A Kustomize overlay supplies it for each environment:

overlays/production/kustomization.yaml
namespace: production
resources:
- ../../base/1-rbac

This means the same base files work across dev, staging, and production without duplication.
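
To render the result without touching the cluster, or to build and apply it in one step (paths relative to the infrastructure/ directory):

kubectl kustomize overlays/production   # render to stdout
kubectl apply -k overlays/production    # build and apply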

Use kubectl dry-run to generate initial YAML for any binding:

kubectl create rolebinding team-admins-namespace-admin \
--clusterrole=namespace-admin \
--group=team-admins \
--namespace=production \
--dry-run=client -o yaml

For the frontend-developer ClusterRole:

kubectl create clusterrole frontend-developer \
--verb=get,list,watch,create,update,patch,delete \
--resource=deployments,replicasets,configmaps,services \
--dry-run=client -o yaml

Dry-run output gives you a correct starting point, but a single command grants the same verbs to every resource it lists. Extend the rules by hand for the per-resource differences, such as the read-only pods/log rule and the reduced verb set on Services.
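
For example, the read-only pod rule ends up being appended to the generated rules by hand:

rules:
# appended by hand: pods and their logs stay read-only for frontend developers
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]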