Kubernetes workload identity: IRSA on EKS, Workload Identity on GKE and AKS

Static cloud credentials in Kubernetes Secrets are a breach waiting to happen. Every major cloud provider offers a workload identity mechanism that lets pods authenticate to cloud services using short-lived, automatically rotated tokens instead. This guide covers the setup for EKS (IRSA and Pod Identity), GKE (Workload Identity Federation), and AKS (Microsoft Entra Workload ID), with a comparison table and decision guide at the end.

Goal

After completing this guide you will have pods authenticating to cloud services (S3, GCS, Azure Storage, and similar) using short-lived, automatically rotating credentials, with zero static access keys stored anywhere in your cluster.

Prerequisites

  • An EKS, GKE, or AKS cluster (at least one) with the minimum versions listed in the comparison table
  • kubectl access with permissions to create ServiceAccounts and annotate them
  • Cloud-side IAM permissions to create roles (AWS), IAM policy bindings (GCP), or managed identities (Azure)
  • CLI tools installed: eksctl and aws (EKS), gcloud (GKE), or az (AKS)
  • Familiarity with Kubernetes RBAC and service accounts

Why static credentials in Secrets are not enough

Kubernetes Secrets are stored unencrypted in etcd by default. Base64 is an encoding, not encryption. Anyone with API read access to Secrets or a copy of an etcd backup can read every Secret in the cluster. Storing a long-lived AWS access key or GCP service account JSON in a Secret means that key can be exfiltrated and used indefinitely, from anywhere, by anyone who obtains it.

Short-lived tokens fix both problems at once. Workload identity issues tokens that expire within an hour and cannot be replayed outside the original trust chain. No key to rotate manually, no credential to leak to Git. If you want a deeper comparison of how Sealed Secrets, ESO, and Vault handle the etcd-exposure problem, see the secrets management comparison.

The shared mechanism: OIDC federation

All three providers use the same underlying protocol:

  1. The Kubernetes cluster acts as an OIDC identity provider, publishing a discovery endpoint and signing keys.
  2. A pod receives a short-lived, audience-scoped JWT (the projected service account token) signed by the cluster's private key.
  3. The pod presents that JWT to the cloud provider's Security Token Service (AWS STS, Google STS, Microsoft Entra ID).
  4. The STS validates the JWT signature against the cluster's published keys, checks trust conditions, and issues short-lived cloud credentials.
  5. The application uses those credentials transparently through standard SDKs. No code changes required.

The cloud never stores a copy of the Kubernetes private key. It only needs the public signing keys to verify tokens. This is the security property that makes the entire model work.
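The projected token in step 2 is an ordinary JWT, so you can inspect the claims the cloud STS later validates (iss, sub, aud) by decoding its payload segment. A minimal Python sketch, using a hand-crafted sample token with illustrative values rather than a real cluster token:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Sample payload shaped like a projected service account token (values invented).
claims = {
    "iss": "https://oidc.eks.eu-west-1.amazonaws.com/id/ABCDEF1234567890",
    "sub": "system:serviceaccount:payments:s3-reader",
    "aud": ["sts.amazonaws.com"],
    "exp": 1700003600,
}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{body}.signature"

print(jwt_claims(token)["sub"])  # system:serviceaccount:payments:s3-reader
```

The sub claim is exactly the string the trust conditions in the provider sections below are matched against, which is why a namespace or service account rename breaks authentication.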

EKS: IRSA and Pod Identity

AWS offers two mechanisms. IRSA (IAM Roles for Service Accounts) has been available since 2019. EKS Pod Identity launched in November 2023 as a simpler alternative.

IRSA setup

IRSA works by projecting an OIDC JWT into the pod at /var/run/secrets/eks.amazonaws.com/serviceaccount/token. The AWS SDK reads this path via the AWS_WEB_IDENTITY_TOKEN_FILE environment variable and calls STS AssumeRoleWithWebIdentity automatically.

Step 1. Associate an IAM OIDC provider with the cluster (once per cluster):

# Creates the OIDC provider in IAM for your EKS cluster
eksctl utils associate-iam-oidc-provider \
  --cluster=payments-prod \
  --approve

If your cluster uses a VPC endpoint, this command may fail with NXDOMAIN on oidc.eks.<region>.amazonaws.com. Run it from outside the VPC or configure split-horizon DNS in Route 53 Resolver.

Step 2. Create an IAM role with a trust policy scoped to a specific service account:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/ABCDEF1234567890"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.eu-west-1.amazonaws.com/id/ABCDEF1234567890:sub": "system:serviceaccount:payments:s3-reader"
      }
    }
  }]
}

The StringEquals condition on :sub is non-negotiable. Without it, every service account in the cluster can assume this role.
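Because the OIDC issuer string appears in both the Principal ARN and the Condition key, and a typo breaks the trust silently, it can help to generate the policy rather than hand-edit it. A sketch under stated assumptions: the helper name and argument layout are my own, not an AWS API.

```python
import json

def irsa_trust_policy(account_id: str, oidc_issuer: str, namespace: str, sa: str) -> dict:
    """Build an IRSA trust policy scoped to a single service account.

    oidc_issuer is the bare issuer host/path, e.g.
    oidc.eks.eu-west-1.amazonaws.com/id/ABCDEF1234567890 (no https:// prefix).
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_issuer}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # The :sub condition pins the role to one namespace/SA pair.
                    f"{oidc_issuer}:sub": f"system:serviceaccount:{namespace}:{sa}"
                }
            },
        }],
    }

policy = irsa_trust_policy(
    "123456789012",
    "oidc.eks.eu-west-1.amazonaws.com/id/ABCDEF1234567890",
    "payments",
    "s3-reader",
)
print(json.dumps(policy, indent=2))
```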

Step 3. Annotate the Kubernetes ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: payments
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/payments-s3-reader

Step 4. Reference the service account in your pod spec and deploy. The SDK handles the rest. Verify by checking the pod's environment:

kubectl exec -n payments deploy/payment-processor -- env | grep AWS_
# Expected output:
# AWS_ROLE_ARN=arn:aws:iam::123456789012:role/payments-s3-reader
# AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

Pod Identity setup

Pod Identity requires no OIDC provider registration. An EKS Pod Identity Agent DaemonSet runs on each node and serves credentials directly at 169.254.170.23.
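The agent speaks the same HTTP credential protocol the AWS SDKs already use for container credential providers: the SDK GETs the endpoint and receives a JSON document with temporary keys and an expiry. A sketch of parsing such a response; the field names follow the container credential format, and all values here are invented:

```python
import json
from datetime import datetime, timezone

# Shape of a credential response from the agent endpoint (sample values are fake).
SAMPLE_RESPONSE = json.dumps({
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "fake-secret",
    "Token": "fake-session-token",
    "Expiration": "2030-01-01T00:00:00Z",
})

def parse_credentials(body: str) -> dict:
    """Extract temporary credentials and their remaining lifetime in seconds."""
    doc = json.loads(body)
    expires = datetime.fromisoformat(doc["Expiration"].replace("Z", "+00:00"))
    return {
        "access_key": doc["AccessKeyId"],
        "session_token": doc["Token"],
        "seconds_left": (expires - datetime.now(timezone.utc)).total_seconds(),
    }

creds = parse_credentials(SAMPLE_RESPONSE)
print(creds["access_key"])  # ASIAEXAMPLE
```

In practice the SDK does all of this for you; the sketch only illustrates why no OIDC token exchange with AWS STS happens inside the pod.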

Step 1. Install the Pod Identity Agent add-on (skip if using EKS Auto Mode):

aws eks create-addon \
  --cluster-name payments-prod \
  --addon-name eks-pod-identity-agent

Step 2. Create an IAM role with a reusable trust policy (not cluster-specific):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "pods.eks.amazonaws.com"},
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}

This same trust policy works across every EKS cluster in your account. No per-cluster OIDC ARN.

Step 3. Create the pod identity association:

aws eks create-pod-identity-association \
  --cluster-name payments-prod \
  --namespace payments \
  --service-account s3-reader \
  --role-arn arn:aws:iam::123456789012:role/payments-s3-reader

No annotation needed on the service account. The mapping lives in the EKS control plane.

If your environment uses an HTTP proxy, add 169.254.170.23 (IPv4) or [fd00:ec2::23] (IPv6) to NO_PROXY so credential requests bypass the proxy.

IRSA caveats

  • If IMDS is not restricted at the node level, pods can still fall back to the EC2 node role regardless of IRSA configuration.
  • Pods with hostNetwork: true always have IMDS access.
  • Cross-account requires manual role chaining: the pod assumes a role in its own account, which then assumes a role in the target account.
  • IRSA requires Kubernetes 1.12+ for ProjectedServiceAccountToken support.

Pod Identity caveats

  • Does not support Fargate, Windows pods, or EKS Anywhere.
  • Requires Kubernetes 1.24+ (platform version eks.4 for 1.28+).
  • Maximum 5,000 pod identity associations per cluster.
  • Native cross-account support (targetRoleArn) was added in June 2025.

GKE: Workload Identity Federation

GKE Workload Identity Federation uses a per-project workload identity pool (PROJECT_ID.svc.id.goog) with the cluster registered as an identity provider. A GKE Metadata Server DaemonSet intercepts metadata requests and exchanges Kubernetes service account JWTs for Google access tokens via Google's Security Token Service.

On Autopilot clusters, Workload Identity Federation is always enabled. No setup needed.

Standard cluster setup

Step 1. Enable the workload identity pool on the cluster:

gcloud container clusters update payments-prod \
  --location=europe-west4 \
  --workload-pool=myproject-123.svc.id.goog

Step 2. Enable the GKE Metadata Server on the node pool:

gcloud container node-pools update default-pool \
  --cluster=payments-prod \
  --location=europe-west4 \
  --workload-metadata=GKE_METADATA

Enabling this on an existing node pool takes effect immediately for all workloads in that pool. Plan for a brief disruption.

Step 3. Grant IAM permissions directly to the Kubernetes ServiceAccount principal (the recommended method since direct KSA binding became available):

gcloud projects add-iam-policy-binding projects/myproject-123 \
  --role=roles/storage.objectViewer \
  --member="principal://iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/myproject-123.svc.id.goog/subject/ns/payments/sa/gcs-reader" \
  --condition=None

This approach skips the legacy Google service account (GSA) impersonation pattern entirely. No GSA to create, no workloadIdentityUser binding, no annotation on the KSA. Some Google Cloud APIs do not yet support direct federation and still require the legacy GSA impersonation path.
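The principal:// member string encodes the project number, pool, namespace, and KSA name, and a mismatch in any segment produces a silent 403 rather than an error about the binding. A small helper to assemble it consistently (the function name is my own, not a Google API):

```python
def wif_principal(project_number: str, pool: str, namespace: str, ksa: str) -> str:
    """Build the IAM member string for a Kubernetes ServiceAccount under GKE WIF."""
    return (
        f"principal://iam.googleapis.com/projects/{project_number}"
        f"/locations/global/workloadIdentityPools/{pool}"
        f"/subject/ns/{namespace}/sa/{ksa}"
    )

# Matches the --member value used in the gcloud command above.
member = wif_principal("123456789", "myproject-123.svc.id.goog", "payments", "gcs-reader")
print(member)
```

Note that the string mixes the numeric project number (for the pool path) with the project ID (inside the pool name itself); using the project ID in both places is a common cause of bindings that never match.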

Step 4. Create the Kubernetes ServiceAccount and deploy your workload:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gcs-reader
  namespace: payments

Verify:

kubectl exec -n payments deploy/payment-processor -- \
  curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
# Expected: principal://iam.googleapis.com/... (or GSA email if using legacy method)

GKE caveats

  • Pods with hostNetwork: true bypass Workload Identity Federation.
  • The metadata server has a 500 concurrent connection limit per node; excess calls queue.
  • Workloads sharing the same namespace and service account name across clusters in the same project share the same identity. Untrusted clusters in the same project can impersonate each other's workloads. Mitigate by using separate projects or unique namespace prefixes.
  • Token exchange API quota: 6,000 requests per minute per project.

AKS: Microsoft Entra Workload ID

AKS Workload Identity replaced the deprecated AAD Pod Identity (end-of-support September 2025). A mutating admission webhook injects environment variables and a projected token into pods labeled with azure.workload.identity/use: "true". The Azure Identity client library exchanges the token for a Microsoft Entra access token.
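When the webhook mutates a pod, it injects a predictable set of environment variables (names per the azure-workload-identity documentation); checking for them from inside the pod is the quickest way to tell whether the mutation happened. A sketch, assuming those documented variable names:

```python
import os

# Environment variables the AKS workload identity webhook is expected to inject
# into pods carrying the azure.workload.identity/use label.
INJECTED = [
    "AZURE_CLIENT_ID",
    "AZURE_TENANT_ID",
    "AZURE_FEDERATED_TOKEN_FILE",
    "AZURE_AUTHORITY_HOST",
]

def missing_identity_env(environ=os.environ) -> list:
    """Return the injected-variable names absent from the pod environment.

    A non-empty result usually means the use label is missing and the
    webhook never mutated the pod.
    """
    return [name for name in INJECTED if name not in environ]

print(missing_identity_env({}))  # all four names -> webhook did not run
```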

Setup

Step 1. Enable the OIDC issuer and Workload Identity on the cluster:

# New cluster
az aks create \
  --resource-group payments-rg \
  --name payments-prod \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --generate-ssh-keys

# Existing cluster
az aks update \
  --resource-group payments-rg \
  --name payments-prod \
  --enable-oidc-issuer \
  --enable-workload-identity

Step 2. Retrieve the OIDC issuer URL:

export AKS_OIDC_ISSUER="$(az aks show \
  --name payments-prod \
  --resource-group payments-rg \
  --query "oidcIssuerProfile.issuerUrl" \
  --output tsv)"

Step 3. Create a user-assigned managed identity and grant it a role:

az identity create \
  --name payments-keyvault-reader \
  --resource-group payments-rg \
  --location westeurope

export CLIENT_ID="$(az identity show \
  --resource-group payments-rg \
  --name payments-keyvault-reader \
  --query 'clientId' --output tsv)"

Step 4. Create the Kubernetes ServiceAccount with the client ID annotation:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: keyvault-reader
  namespace: payments
  annotations:
    azure.workload.identity/client-id: "${CLIENT_ID}"

Step 5. Create the federated identity credential that binds the managed identity to the Kubernetes service account:

az identity federated-credential create \
  --name payments-keyvault-fedcred \
  --identity-name payments-keyvault-reader \
  --resource-group payments-rg \
  --issuer "${AKS_OIDC_ISSUER}" \
  --subject "system:serviceaccount:payments:keyvault-reader" \
  --audience api://AzureADTokenExchange

Federated credential propagation takes a few seconds. Token requests immediately after creation may fail with a 401 until the cache refreshes. Azure RBAC role assignment propagation can take up to 10 minutes.

Step 6. Add the required label to your pod spec and deploy:

metadata:
  labels:
    azure.workload.identity/use: "true"   # triggers the webhook
spec:
  serviceAccountName: keyvault-reader

Without this label, the webhook does not inject the identity. The pod runs but cloud authentication fails silently.

AKS caveats

  • Maximum 20 federated identity credentials per managed identity.
  • Virtual nodes (Virtual Kubelet) are not supported.
  • Requires AKS 1.22+ and Azure CLI 2.47.0+.
  • Changing the client-id annotation requires a pod restart.
  • Minimum SDK versions: Azure.Identity 1.9.0 (.NET), azidentity 1.3.0 (Go), azure-identity 1.9.0 (Java), @azure/identity 3.2.0 (Node.js), azure-identity 1.13.0 (Python).

Cross-provider comparison

| Capability | EKS IRSA | EKS Pod Identity | GKE WIF | AKS Workload ID |
| --- | --- | --- | --- | --- |
| Token issuer | Kubernetes OIDC | Pod Identity Agent | GKE Metadata Server | Kubernetes OIDC |
| Cloud STS | AWS STS | Agent endpoint | Google STS | Microsoft Entra ID |
| IAM object | IAM Role (OIDC trust) | IAM Role (service trust) | IAM principal binding | User-assigned managed identity |
| Token lifetime | 1 hour | Short-lived (agent-managed) | 1 hour | 1 hour (configurable 1-24h) |
| Min K8s version | 1.12 | 1.24 | 1.21 (GA) | 1.22 |
| Fargate support | Yes | No | N/A | N/A |
| Windows support | Yes | No | Yes | Yes |
| Cross-account | Manual role chaining | Native (targetRoleArn, 2025) | Cross-project via Fleet WIF | Cross-tenant via federated cred |
| Deprecated predecessor | kiam, kube2iam | N/A | Legacy GSA impersonation | AAD Pod Identity |

IRSA vs. Pod Identity: which to use

Pod Identity for new clusters. Simpler setup, no OIDC provider registration, reusable trust policies across clusters, and native cross-account support since June 2025.

IRSA for existing setups. No compelling reason to migrate a working IRSA configuration. IRSA is also the only option for Fargate, Windows pods, EKS Anywhere, and clusters below Kubernetes 1.24.

Both can coexist in the same cluster. Migration is gradual, not all-or-nothing.

Security best practices across providers

Block IMDS at the node level (EKS). If the EC2 Instance Metadata Service is unrestricted, pods can acquire the node's IAM role regardless of IRSA or Pod Identity configuration. Set the IMDS hop limit to 1 or block it entirely for non-hostNetwork pods.

One service account per workload. Create a dedicated ServiceAccount and cloud identity per application. Do not reuse the default service account. Do not share a single IAM role across unrelated workloads. This aligns with the least-privilege service account principle from RBAC.

Scope trust conditions tightly. On EKS, always include the :sub condition in the IRSA trust policy. On GKE, separate untrusted clusters into different projects. On AKS, use distinct managed identities per workload.

Audit credential exchanges. AWS logs all AssumeRoleWithWebIdentity calls in CloudTrail. GCP logs principal identifiers (including namespace and SA name) in Cloud Audit Logs. Azure captures federated token exchanges in Entra sign-in logs. Set up alerts for unexpected role assumptions.

Disable automountServiceAccountToken on non-API workloads. If a pod does not need the Kubernetes API, set automountServiceAccountToken: false on its service account. This does not affect workload identity tokens (those use a separate projected volume), but it reduces the attack surface if the pod is compromised. The mechanics of the standard mount, including why the token at /var/run/secrets/kubernetes.io/serviceaccount/token rotates on its own and what audience binding adds beyond expiry, are covered in Kubernetes service account tokens.
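For example, disabling the automatic API token on the IRSA service account from earlier; the IRSA token still arrives via its own projected volume:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: payments
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/payments-s3-reader
automountServiceAccountToken: false   # no default API token; the workload identity token is projected separately
```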

Verify the final result

After completing the setup for your provider, confirm that the workload can access cloud resources:

# EKS: verify the pod can list S3 buckets
kubectl exec -n payments deploy/payment-processor -- aws s3 ls

# GKE: verify the pod can list GCS buckets
kubectl exec -n payments deploy/payment-processor -- gcloud storage ls

# AKS: verify the pod can read a Key Vault secret
kubectl exec -n payments deploy/payment-processor -- \
  az keyvault secret show --vault-name payments-vault --name db-password

If the command succeeds, the workload identity chain is working. The pod authenticated to the cloud provider without any static credential.

Common troubleshooting

Pod shows An error occurred (AccessDenied) on EKS. Check that the OIDC provider is associated (aws eks describe-cluster --name <cluster> --query "cluster.identity.oidc.issuer"), the trust policy includes the :sub condition matching the correct namespace and service account name, and the service account annotation matches the role ARN exactly.

Pod gets 403 on GKE despite IAM binding. Verify that the node pool has --workload-metadata=GKE_METADATA enabled. Without the GKE Metadata Server, the pod falls back to the node's default service account.

AKS pod silently fails to authenticate. Confirm the pod has the azure.workload.identity/use: "true" label. Without it, the mutating webhook does not inject the projected token. Also check that the federated credential has propagated (wait 30 seconds after creation).

Token refresh errors in long-running pods. All three providers issue tokens that expire (typically 1 hour). Standard SDKs refresh tokens automatically by re-reading the token file from disk. If you use a custom HTTP client or an old SDK version, the refresh may not happen. Update to a supported SDK version.
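A custom client can copy the SDK's behavior by re-reading the token file on each use instead of caching the string. A sketch, assuming the IRSA token path from earlier; the function names are my own:

```python
import base64
import json
import time

def read_web_identity_token(path: str) -> str:
    """Re-read the projected token file on every call.

    The kubelet rewrites this file before the token expires, so caching
    the string in memory is exactly what causes refresh errors.
    """
    with open(path) as f:
        return f.read().strip()

def seconds_until_expiry(token: str) -> float:
    """Remaining lifetime based on the JWT exp claim (no signature check)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] - time.time()
```

If seconds_until_expiry on a freshly read token is negative or near zero, the problem is on the projection side (kubelet, webhook) rather than in your client.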

When to escalate

If workload identity is not working after verifying the setup, collect:

  • The exact error message and HTTP status code
  • Cloud provider: EKS, GKE, or AKS, and cluster version (kubectl version; the --short flag was removed in recent kubectl releases)
  • The ServiceAccount YAML (annotations, namespace)
  • The IAM trust policy or federated credential configuration
  • kubectl describe pod <name> -n <namespace> output (check for webhook injection or projected volume mounts)
  • CloudTrail, Cloud Audit Logs, or Entra sign-in logs for the denied request
  • Whether IMDS is restricted (EKS) or the node pool has GKE Metadata Server enabled (GKE)
