Why Kubernetes RBAC Is So Easy to Get Wrong

Kubernetes RBAC is complex by design: it's a powerful, flexible system that can express fine-grained access controls for hundreds of resource types and operations. That flexibility is also why misconfigurations are so common.

The operational pressure to "just get it working" often leads to developers or platform engineers adding broad permissions and planning to restrict them later. In our experience, "later" rarely comes. The broad permission stays in the cluster indefinitely, and the next engineer who looks at it assumes someone else thought about it.

In every cluster assessment we've done, we find at least one ServiceAccount bound to a ClusterRole with verbs: ["*"] or resources: ["*"]. Often it's a monitoring agent, a CI/CD runner, or a backup tool that was given admin-equivalent access because the documentation said "requires cluster admin" and nobody questioned it.

The Wildcard Trap

Wildcards in RBAC rules are the fastest path to overpermission. verbs: ["*"] means "all verbs including create, delete, escalate, impersonate." resources: ["*"] means "all resources including secrets, rolebindings, clusterrolebindings." The combination is functionally equivalent to cluster-admin.

overpermissive-clusterrole.yaml (YAML): dangerous pattern
# This is effectively cluster-admin:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-agent
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
# If this ServiceAccount is compromised, the attacker owns the cluster.

ClusterRole vs Role: Why Scope Matters

A Role is namespace-scoped: its permissions apply only within a specific namespace. A ClusterRole is cluster-scoped: it applies across all namespaces, or to cluster-scoped resources.

The dangerous pattern is using ClusterRoleBinding to bind a ClusterRole to a ServiceAccount that only needs namespace-scoped access. If the ServiceAccount is compromised, it has access to resources in every namespace, including kube-system, where control plane components and their secrets live.

Use namespace-scoped Role + RoleBinding wherever possible. Only use ClusterRole when the resource is genuinely cluster-scoped (nodes, namespaces, persistent volumes) or when the access needs to span multiple namespaces.
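As a sketch of the safer pattern, here is a namespace-scoped Role plus RoleBinding for a monitoring agent that only needs to read pods; the names and namespace are illustrative:

```yaml
# Namespace-scoped alternative to a broad ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: monitoring
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: monitoring
  name: monitoring-agent-pod-reader
subjects:
- kind: ServiceAccount
  name: monitoring-agent
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
# If this ServiceAccount is compromised, the blast radius is
# read-only pod access in one namespace, not the whole cluster.
```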

Dangerous Permissions: What to Watch For

Not all overpermission is equal. These specific permissions are most dangerous from a privilege escalation perspective:

  • pods/exec create: Allows executing commands inside any pod. If combined with access to pods in kube-system, an attacker can exec into a control plane component and extract cluster admin credentials.
  • rolebindings or clusterrolebindings create/update: An attacker who can create RoleBindings can bind themselves to cluster-admin, completing a full privilege escalation chain.
  • escalate verb on roles: Allows creating Roles with permissions the creator doesn't currently have. A dangerous vector for privilege escalation.
  • bind verb on roles: Allows creating a binding to a Role or ClusterRole without holding the permissions it grants yourself, bypassing the API server's escalation check.
  • impersonate verb on users/serviceaccounts: Allows API requests to be made as another user or ServiceAccount, including higher-privileged ones.
  • secrets get/list: Allows reading secrets in the namespace, which often contain database passwords, API keys, and other credentials.
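To make the binding-based escalation concrete, here is a sketch of a ClusterRole combining two of the permissions above; the role name is hypothetical:

```yaml
# Two seemingly narrow permissions that combine into full escalation
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: binding-manager
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterrolebindings"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["bind"]
  resourceNames: ["cluster-admin"]
# "create" alone is blocked by the API server's escalation check,
# but "bind" on cluster-admin bypasses that check: the holder can
# bind any subject, including its own ServiceAccount, to cluster-admin.
```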

How Privilege Escalation Chains Work in Practice

The typical chain from compromised pod to cluster-admin looks like this:

  1. Attacker compromises a pod via RCE vulnerability in the application
  2. Pod has an auto-mounted ServiceAccount token (default in most clusters)
  3. Attacker reads the ServiceAccount token from /var/run/secrets/kubernetes.io/serviceaccount/token
  4. ServiceAccount has secrets: get in the namespace; the attacker reads all secrets, finding database credentials and more tokens
  5. ServiceAccount has pods: create in kube-system; the attacker creates a privileged pod with the host filesystem mounted
  6. Attacker reads /etc/kubernetes/admin.conf from the host filesystem and now holds a cluster-admin kubeconfig

pod-escalation.yaml (YAML): privileged pod for host access
# Malicious pod that mounts host filesystem:
apiVersion: v1
kind: Pod
metadata:
  name: escape-pod
  namespace: kube-system
spec:
  containers:
  - name: shell
    image: alpine
    command: ["/bin/sh", "-c", "cat /host/etc/kubernetes/admin.conf"]
    volumeMounts:
    - mountPath: /host
      name: host-root
  volumes:
  - name: host-root
    hostPath:
      path: /
# If the ServiceAccount can create pods in kube-system,
# this gives cluster-admin access within seconds.

The ServiceAccount Token Attack

By default, Kubernetes mounts a ServiceAccount token into every pod at /var/run/secrets/kubernetes.io/serviceaccount/token. Most applications don't need this token: it exists for in-cluster API access, and most application pods never make Kubernetes API calls.

Any attacker who can exec into a pod or read its files can extract this token. If the ServiceAccount has any significant permissions, the token is a lateral movement opportunity.
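From inside a compromised pod, extracting and using the token takes only a couple of commands; the paths and the kubernetes.default.svc endpoint are the in-cluster defaults:

```shell
# Read the auto-mounted ServiceAccount credentials
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")
NAMESPACE=$(cat "$SA_DIR/namespace")

# Query the API server directly with the stolen token,
# e.g. list all secrets in the pod's namespace
curl -s --cacert "$SA_DIR/ca.crt" \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/secrets"
```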

The fix is simple: set automountServiceAccountToken: false in pod specs and ServiceAccount definitions for workloads that don't need Kubernetes API access. For workloads that do need it, scope the ServiceAccount to the minimum required permissions.
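At the pod level the same setting looks like this; a pod-level value overrides the ServiceAccount-level one, and the workload name here is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
  namespace: production
spec:
  # This workload never calls the Kubernetes API,
  # so don't mount a token for an attacker to steal
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx:1.27
```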

How to Audit Your RBAC with kubectl and Tools

rbac-audit.sh (Shell): kubectl audit commands
# Check what a ServiceAccount can do
kubectl auth can-i --list \
  --as=system:serviceaccount:default:myapp-sa

# Find all ClusterRoleBindings that reference cluster-admin
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name'

# Find ClusterRoles with wildcard verbs
kubectl get clusterroles -o json | \
  jq '.items[] | select(any(.rules[]?; .verbs // [] | index("*"))) | .metadata.name'

# Find pods in the current namespace that auto-mount a ServiceAccount token
kubectl get pods -o json | \
  jq '.items[] | select(.spec.automountServiceAccountToken != false) | .metadata.name'

# Use rbac-tool for comprehensive analysis
kubectl rbac-tool who-can create pods -n kube-system
kubectl rbac-tool who-can get secrets

Tools for deeper RBAC analysis: rbac-tool, kube-bench, Popeye, and Kubescape all provide RBAC analysis beyond what kubectl's built-in commands offer. Kubescape is particularly useful for checking compliance with the NSA/CISA Kubernetes hardening guidance.

Least-Privilege RBAC Patterns

least-privilege-role.yaml (YAML): scoped role example
# Scoped Role for an application that only needs to read ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: configmap-reader
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
  # Only specific named ConfigMaps, not all ConfigMaps.
  # Note: resourceNames restricts get/update/delete, but plain
  # list and watch requests cannot be limited by resource name.
  resourceNames: ["app-config", "feature-flags"]
---
# Disable auto-mounted service account token
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: production
automountServiceAccountToken: false

IaC Scanning for Kubernetes Manifests

RBAC misconfigurations in Kubernetes manifests are detectable at the IaC scanning stage, before they're deployed. Tools like Checkov, KICS, Terrascan, and AquilaX's IaC scanner can flag overpermissive RBAC rules in Helm charts and raw Kubernetes YAML.

The patterns to detect: wildcard verbs or resources in roles, ClusterRoleBindings for resources that could be namespace-scoped, ServiceAccounts with automountServiceAccountToken: true (or not set), and bindings that grant pods/exec or secrets access broadly.
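With Checkov, for example, scanning manifests is a single command; the directory and chart paths are illustrative, and the rendered-chart step assumes Helm is installed:

```shell
# Scan raw Kubernetes manifests for misconfigurations
checkov -d ./k8s-manifests --framework kubernetes

# Helm charts: render first, then scan the output
helm template my-release ./chart > rendered.yaml
checkov -f rendered.yaml --framework kubernetes
```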

Integrate IaC scanning into your GitOps workflow so that every Kubernetes manifest change is checked before it reaches the cluster. An RBAC misconfiguration caught in a PR review is far less expensive than one discovered during a security audit.

Scan Your Kubernetes Manifests for RBAC Issues

AquilaX IaC scanner detects Kubernetes RBAC misconfigurations and overpermissive bindings across your Helm charts and Kubernetes manifests.

Start Free Scan