Configuring RBAC
Kubernetes RBAC has three moving parts: a ClusterRole that lists allowed verbs and resources, a subject (User, Group, or ServiceAccount) that holds an identity, and a RoleBinding that connects the two within a namespace. This project defines ClusterRoles once at the cluster level and reuses them per namespace through overlays.
Where RBAC files live
RBAC resources belong in infrastructure/, not in Helm templates/. The rule is: if running helm uninstall would break something outside the app, that resource is not part of the app lifecycle.
ClusterRoles and group RoleBindings are organizational policy. They survive chart uninstalls, are shared across namespaces, and have no dependency on a specific deployment. ServiceAccounts that give pods an identity do belong in Helm templates — they are created and deleted with the chart.
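A chart-owned ServiceAccount, by contrast, is just another Helm template that comes and goes with the release. A minimal sketch (the helper names are illustrative, not this project's actual chart):

```yaml
# templates/serviceaccount.yaml -- lives in the chart, deleted on uninstall
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "app.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "app.name" . }}
```

Pods then reference it via spec.serviceAccountName, so the identity and the workload share one lifecycle.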
Role hierarchy
Four roles cover the access needs in this project:
| Role | Based on | Who |
|---|---|---|
| namespace-admin | Aggregated ClusterRole | Co-founders, lead engineers |
| edit | Built-in | Team members |
| view | Built-in | Contractors |
| frontend-developer | Custom ClusterRole | Frontend developers |
namespace-admin is the highest namespace-scoped tier. It uses an aggregation rule that selects any ClusterRole labeled rbac.authorization.k8s.io/aggregate-to-admin: "true", so CRD controllers automatically extend it when they install new resource types. It grants full management of namespace-scoped resources, including Roles and RoleBindings. Kubernetes prevents privilege escalation — a namespace-admin cannot create bindings for permissions they do not already hold.
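The aggregation mechanism looks like this in manifest form (a sketch of the pattern described above; the exact manifest in infrastructure/ may differ):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-admin
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules: []  # filled in automatically; the controller manager merges in
           # the rules of every ClusterRole matching the selector
```

Any CRD controller that ships a ClusterRole carrying the aggregate-to-admin label extends namespace-admin without this file ever changing.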
edit (built-in) grants create, update, and delete access to most workload resources. It does not include Roles or RoleBindings, so team members can manage deployments without touching namespace RBAC.
view (built-in) is read-only: get, list, and watch on most namespace-scoped resources. Note that the built-in view role deliberately excludes Secrets, since reading a Secret's contents would expose ServiceAccount credentials; contractors bound to view cannot read Secret values.
frontend-developer is a custom role for developers who should not touch backend infrastructure. It grants full CRUD on Deployments, ReplicaSets, and ConfigMaps, plus update/patch on Services. Pods and pod logs are read-only. It excludes Secrets, PersistentVolumeClaims, ServiceAccounts, and network resources that the built-in edit role would otherwise permit.
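As a rules block, the frontend-developer role sketched above might look like this (illustrative, not necessarily this project's exact manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: frontend-developer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
```

RBAC is additive with no deny rules, so "excludes Secrets" simply means no rule mentions them.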
Group-based bindings
Bind to groups, not individual users. One RoleBinding per group covers everyone in it. Promotion and demotion mean moving a person between groups in your identity provider, not editing YAML.
Kubernetes groups are implicit: you do not create a group resource. A group exists the moment a RoleBinding references it. Group membership is asserted by the authentication layer (certificate O field or OIDC claim) at request time.
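Because membership is asserted at request time, you can check what a group binding grants without being a member, using kubectl impersonation (requires impersonate permission on your own identity; the user name below is arbitrary):

```shell
# Should a team-members member be able to create deployments? (expect: yes)
kubectl auth can-i create deployments \
  --as some-user --as-group team-members --namespace production

# edit excludes RBAC objects, so this should answer: no
kubectl auth can-i create rolebindings \
  --as some-user --as-group team-members --namespace production
```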
The four bindings in this project:
team-admins → namespace-admin
```yaml
subjects:
  - kind: Group
    name: team-admins
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-admin
```

team-members → edit
```yaml
subjects:
  - kind: Group
    name: team-members
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
```

contractors → view
```yaml
subjects:
  - kind: Group
    name: contractors
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
```

The binding names follow the pattern <group>-<role>: for example, team-admins-namespace-admin reads as "team-admins get namespace-admin."
Per-namespace scoping via overlays
The base RoleBinding manifests omit the namespace field. A Kustomize overlay supplies it for each environment:
```yaml
namespace: production
resources:
  - ../../base/1-rbac
```

This means the same base files work across dev, staging, and production without duplication.
Generating scaffolding
Use kubectl dry-run to generate initial YAML for any binding:
```shell
kubectl create rolebinding team-admins-namespace-admin \
  --clusterrole=namespace-admin \
  --group=team-admins \
  --namespace=production \
  --dry-run=client -o yaml
```

For the frontend-developer ClusterRole:
```shell
kubectl create clusterrole frontend-developer \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments,replicasets,configmaps,services \
  --dry-run=client -o yaml
```

Dry-run output gives you a correct starting point. Adjust the rules by hand afterwards: trim verbs the role should not grant (the generated command gives all verbs to Services) and add resources the generator does not cover (such as pods/log).