Table of contents
- What RBAC does (and does not do)
- The four RBAC objects
- Role vs. ClusterRole
- RoleBinding vs. ClusterRoleBinding
- Service accounts for workloads
- Aggregated ClusterRoles
- Debugging with kubectl auth can-i
- Least-privilege patterns for CI/CD
- Common mistakes and security pitfalls
- When to escalate
- How to prevent recurrence
What RBAC does (and does not do)
RBAC is Kubernetes' authorization layer. It decides whether an authenticated identity (a user, a group, or a service account) is allowed to perform a specific action on a specific resource. Access is denied by default: if no RBAC rule explicitly grants a permission, the API server returns 403 Forbidden.
RBAC does not handle authentication. By the time an RBAC check runs, Kubernetes already knows who is calling. How that identity was established (client certificate, OIDC token, cloud IAM) is a separate concern.
Two properties shape every decision you make with RBAC:
- Permissions are purely additive. There are no deny rules. You grant access; you cannot explicitly revoke a specific permission that was granted elsewhere. The official documentation states this directly: "Permissions are purely additive (there are no 'deny' rules)."
- RBAC is enabled by default in every cluster created by kubeadm and in every major managed offering (GKE, EKS, AKS). You are already using it.
What RBAC does not control: network traffic between pods. A compromised pod with zero RBAC permissions can still reach every service in the cluster unless you enforce network policies.
The four RBAC objects
RBAC uses four Kubernetes objects from the rbac.authorization.k8s.io/v1 API group:
| Object | Scope | Purpose |
|---|---|---|
| Role | Namespace | Defines a set of permissions within one namespace |
| ClusterRole | Cluster | Defines a set of permissions cluster-wide or for non-namespaced resources |
| RoleBinding | Namespace | Grants a Role or ClusterRole to subjects within one namespace |
| ClusterRoleBinding | Cluster | Grants a ClusterRole to subjects across all namespaces |
Subjects are the identities that receive permissions through bindings. Three kinds exist:
- User (an external identity; Kubernetes does not manage user accounts)
- Group (a set of users; membership is set by the authentication layer)
- ServiceAccount (a namespaced, non-human identity managed by Kubernetes)
Every RBAC rule is a combination of API groups, resources, verbs, and optionally resourceNames. The empty string "" as an API group means the core group (pods, services, secrets, configmaps).
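As an illustration of all four fields together (the role name, namespace, and ConfigMap name here are hypothetical), a rule can be narrowed down to individual named objects with resourceNames:

```yaml
# Hypothetical example combining apiGroups, resources, verbs, and resourceNames.
# resourceNames restricts the grant to specific named objects; note that it
# cannot restrict "list" or "watch", which operate on collections.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-config-reader      # hypothetical name
  namespace: production
rules:
- apiGroups: [""]              # core group: pods, services, secrets, configmaps
  resources: ["configmaps"]
  verbs: ["get", "update"]
  resourceNames: ["app-config"]  # only this one ConfigMap
```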
Role vs. ClusterRole
A Role sets permissions within a single namespace. You specify the namespace when you create it:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
- apiGroups: [""] # core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```
A ClusterRole is not namespaced. Use it for:
- Permissions on non-namespaced resources (nodes, persistent volumes, namespaces themselves)
- Permissions on non-resource URLs (/healthz, /metrics, /readyz)
- A reusable permission template that multiple namespaces share through RoleBindings
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/healthz", "/readyz"] # only valid in ClusterRoles
  verbs: ["get"]
```
Default rule: prefer Role over ClusterRole when permissions are namespace-scoped. A ClusterRole is necessary only when the resource is non-namespaced, when you need nonResourceURLs, or when you want to define permissions once and bind them in multiple namespaces.
Subresources require separate grants
get on pods does not give access to pods/exec, pods/log, or pods/portforward. Each subresource needs its own rule. The verb for exec and port-forward is create, not get:
```yaml
rules:
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"] # exec uses the create verb
```
RoleBinding vs. ClusterRoleBinding
A RoleBinding grants permissions within a specific namespace. A ClusterRoleBinding grants them cluster-wide.
The most commonly misunderstood pattern: a RoleBinding can reference a ClusterRole. When it does, the permissions are scoped to the RoleBinding's namespace, not cluster-wide. This is how Kubernetes intends you to use the four built-in ClusterRoles (view, edit, admin, cluster-admin) without duplicating their definitions in every namespace.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-view
  namespace: staging
subjects:
- kind: Group
  name: dev-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole # references a ClusterRole, but scoped to staging
  name: view
  apiGroup: rbac.authorization.k8s.io
```
This gives the dev-team group read-only access in staging only. They cannot see resources in other namespaces.
Built-in ClusterRoles
Kubernetes ships four ClusterRoles designed for namespace-level use through RoleBindings:
| ClusterRole | What it grants | What it excludes |
|---|---|---|
| view | Read-only access to most resources | Secrets, Role, RoleBinding |
| edit | Full CRUD on workload resources | Role, RoleBinding management |
| admin | Full CRUD including Role/RoleBinding | ClusterRole, ClusterRoleBinding |
| cluster-admin | Unrestricted access | Nothing |
Use view and edit through RoleBindings. Reserve cluster-admin for cluster operators, and bind it through ClusterRoleBindings only when the operator genuinely needs cluster-wide access.
Service accounts for workloads
Every pod runs as a service account. If you do not specify serviceAccountName in your pod spec, the pod runs as default in its namespace. The default service account has no useful RBAC permissions, but its token is still mounted inside the pod.
Token lifecycle (Kubernetes 1.22+)
Since Kubernetes 1.22, pods receive a short-lived, automatically rotating projected token via the TokenRequest API. Before that, every service account got a static, non-expiring token stored in a Secret, and that Secret was mounted into pods.
Since Kubernetes 1.24, the control plane no longer creates those long-lived token Secrets automatically. If your pipeline or tooling still depends on a permanent token in a Secret, you need to explicitly create one, and you should migrate away from that pattern.
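For legacy tooling that genuinely needs a long-lived token, the documented escape hatch is to create the Secret yourself; the control plane then populates it with a token for the named service account. A sketch, assuming a service account called myapp in the default namespace:

```yaml
# Explicitly create a long-lived token Secret (Kubernetes 1.24+).
# The annotation names the service account; the control plane fills in
# the token field. "myapp" is a placeholder service account name.
apiVersion: v1
kind: Secret
metadata:
  name: myapp-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: myapp
type: kubernetes.io/service-account-token
```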
Best practices
- Create a dedicated service account per application. Not per team. Not per namespace. Per application.
- Set automountServiceAccountToken: false on service accounts that do not need API access. An attacker who can exec into a pod can read the token from /var/run/secrets/kubernetes.io/serviceaccount/token. Disabling the mount removes that attack path.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend
  namespace: production
automountServiceAccountToken: false
```
- Scope Roles to exactly the resources and verbs the application calls. Use audit2rbac to generate a minimal RBAC policy from Kubernetes audit logs: run the workload with audit logging enabled, then feed the logs to audit2rbac to see exactly what the application requested.
- Cross-namespace access is possible. A RoleBinding in namespace B can grant a service account from namespace A access to namespace B's resources. This is intentional, but it means you need to audit bindings across namespaces, not just within them.
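As a sketch of the cross-namespace pattern (the names team-a, team-b, and reporting-app are hypothetical), a RoleBinding in one namespace can name a subject that lives in another:

```yaml
# The RoleBinding lives in team-b, but the subject lives in team-a.
# The service account gains read access to team-b's resources only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cross-namespace-read   # hypothetical name
  namespace: team-b
subjects:
- kind: ServiceAccount
  name: reporting-app          # hypothetical service account
  namespace: team-a            # a different namespace than the binding
roleRef:
  kind: ClusterRole
  name: view                   # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```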
Aggregated ClusterRoles
An aggregated ClusterRole uses aggregationRule.clusterRoleSelectors to automatically compose permissions from other ClusterRoles that match a label selector. The control plane merges those rules into the aggregate. Any rules you write directly in an aggregated ClusterRole get overwritten.
This is how the built-in view, edit, and admin ClusterRoles are extensible. When you install a CRD operator (Prometheus, cert-manager, Argo), the operator can ship a ClusterRole with the label rbac.authorization.k8s.io/aggregate-to-view: "true", and the built-in view role automatically includes those permissions.
```yaml
# This ClusterRole aggregates permissions from any ClusterRole labelled
# rbac.mycompany.com/aggregate-to-monitoring: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-aggregate
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.mycompany.com/aggregate-to-monitoring: "true"
rules: [] # control plane fills this; do not edit manually
---
# A component ClusterRole that feeds into the aggregate
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-pods
  labels:
    rbac.mycompany.com/aggregate-to-monitoring: "true"
rules:
- apiGroups: [""]
  resources: ["pods", "services", "endpoints"]
  verbs: ["get", "list", "watch"]
```
Use aggregated ClusterRoles when multiple teams or operators need to contribute permissions to a shared role without editing each other's manifests.
Debugging with kubectl auth can-i
When a pod or user hits Error from server (Forbidden), the error message tells you exactly what failed. Parse it:
```
pods is forbidden: User "system:serviceaccount:default:myapp"
cannot list resource "pods" in API group "" in the namespace "production"
```
That gives you: who (system:serviceaccount:default:myapp), what (list pods), where (namespace production), API group (core).
Step 1: check what the identity can do
```shell
# Check permissions for a specific service account in a specific namespace
kubectl auth can-i --list \
  --as=system:serviceaccount:default:myapp \
  -n production
```
To check a single action:
```shell
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:myapp \
  -n production
# Expected: "no" (confirming the Forbidden error)
```
If kubectl auth can-i --as returns an error about impersonation, the user running the check does not have impersonate permission. You need cluster-admin or a role with impersonate on users and service accounts to debug this way.
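A minimal ClusterRole that allows this style of debugging might look like the following sketch (the name rbac-debugger is hypothetical); users, groups, and service accounts are each separate resources under the impersonate verb:

```yaml
# Grants permission to run kubectl commands with --as / --as-group.
# "rbac-debugger" is a hypothetical name.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbac-debugger
rules:
- apiGroups: [""]
  resources: ["users", "groups", "serviceaccounts"]
  verbs: ["impersonate"]
```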
Step 2: find existing bindings
# Find all RoleBindings and ClusterRoleBindings that reference this service account
```shell
# Find all RoleBindings and ClusterRoleBindings that reference this service account
kubectl get rolebindings,clusterrolebindings -A -o json | \
  jq '.items[] | select(.subjects[]? | .name=="myapp" and .namespace=="default") | {kind, name: .metadata.name, namespace: .metadata.namespace, roleRef: .roleRef}'
```
Step 3: inspect the role
```shell
kubectl describe role <role-name> -n <namespace>
# or for cluster-scoped:
kubectl describe clusterrole <clusterrole-name>
```
Check that the role includes the right API group, resource, and verb combination. A common root cause: the binding exists but references a role that does not include the verb or resource the workload needs.
Step 4: apply the fix and verify
After creating or updating the Role and RoleBinding, verify:
```shell
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:myapp \
  -n production
# Expected: "yes"
```
Use kubectl auth reconcile -f rbac.yaml instead of kubectl apply when applying RBAC manifests. Reconcile understands RBAC semantics: it creates missing objects and adds missing rules and subjects, and it removes extra permissions or subjects only when you explicitly pass --remove-extra-permissions or --remove-extra-subjects.
Least-privilege patterns for CI/CD
Pattern 1: namespace-scoped deployer
Give your CI/CD pipeline a service account that can only modify workloads in its target namespace:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: production
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "configmaps"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: production
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```
This role cannot create or delete services (only update), cannot touch Secrets or Roles, and has no access outside production.
Pattern 2: short-lived tokens per pipeline run
Do not store a long-lived service account token in your CI secrets. Since Kubernetes 1.24, generate a fresh token per run:
```shell
# Generate a token that expires in 1 hour (Kubernetes 1.24+)
TOKEN=$(kubectl create token ci-deployer -n production --duration=3600s)
# Use $TOKEN for this pipeline run only; do not persist it
```
Pattern 3: read-only audit access using built-in ClusterRole
For security auditors or monitoring tools that need read access in a specific namespace, bind the built-in view ClusterRole through a RoleBinding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: auditor-view
  namespace: production
subjects:
- kind: User
  name: auditor@mycompany.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole # reuses the built-in view role
  name: view
  apiGroup: rbac.authorization.k8s.io
```
The view ClusterRole explicitly excludes Secrets, so the auditor can inspect workload configuration without accessing secret data.
Common mistakes and security pitfalls
system:masters is not the same as cluster-admin. The cluster-admin ClusterRole granted via ClusterRoleBinding can be revoked by deleting the binding. The system:masters group is hardcoded into the API server to bypass the entire authorization layer, including authorization webhooks: a member of system:masters cannot be denied anything at the authorization layer, and policy engines plugged in as authorization webhooks never even see their requests. Never add production identities to this group.
Wildcard permissions grant access to future CRDs. A rule with resources: ["*"] does not just cover current resources. It covers every CRD that anyone installs in the future. The RBAC good practices documentation recommends listing resources explicitly.
list and watch on Secrets expose their contents. Running kubectl get secrets -A -o yaml with only list permission returns all secret data in plaintext. The watch verb also streams full secret data. The RBAC good practices documentation treats broad read access to Secrets as a high-severity risk.
Pod-creation rights are broader than they look. The ability to create pods in a namespace implicitly grants access to every Secret, ConfigMap, and ServiceAccount in that namespace (by mounting them or setting serviceAccountName). Pair RBAC with Pod Security Standards to limit what a pod can actually do at runtime.
RBAC does not isolate namespaces at the network level. By default, every pod in a cluster can communicate with every other pod across all namespaces. RBAC controls API access; network policies control traffic. You need both.
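As a sketch of the network side (assuming a CNI plugin that enforces NetworkPolicy, and using production as a placeholder namespace), a default-deny ingress policy looks like:

```yaml
# Deny all ingress traffic to every pod in the namespace.
# Specific traffic must then be re-allowed by narrower policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}          # an empty selector matches every pod in the namespace
  policyTypes: ["Ingress"]
```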
When to escalate
If you have worked through the debugging steps and the permission still does not work as expected, collect the following before asking for help:
- The exact error message (full text, not a summary)
- Output of kubectl auth can-i --list --as=<identity> -n <namespace>
- Output of kubectl get rolebindings,clusterrolebindings -A -o json filtered for the identity
- The YAML of the Role or ClusterRole you expect to apply
- Kubernetes version (kubectl version)
- Whether the cluster uses any custom admission webhooks or authorization webhooks (Rancher, OpenShift, and similar platforms add their own authorization layers on top of RBAC)
- Whether the identity is a user (external auth) or a service account (in-cluster)
This information lets someone diagnose whether the problem is in the RBAC policy, in the binding, in a webhook override, or in the authentication layer.
How to prevent recurrence
- Apply RBAC manifests with kubectl auth reconcile, not kubectl apply. Reconcile is idempotent and avoids accidental deletions.
- Use one service account per application. When all apps in a namespace share the default service account, changing permissions for one app changes them for all.
- Audit bindings regularly. Run kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects' to find every identity with cluster-admin. Remove any that should not be there.
- Set automountServiceAccountToken: false on every service account that does not need API access.
- Generate short-lived tokens for CI/CD instead of storing static credentials.