Kubernetes Pod Security Standards: replacing PodSecurityPolicy after 1.25

PodSecurityPolicy was removed in Kubernetes 1.25. Its replacement, Pod Security Admission (PSA) with Pod Security Standards (PSS), is built in and enabled by default on every cluster, but it does nothing unless you label your namespaces. This guide walks through the three profiles, the three enforcement modes, the migration path from PSP, and the cluster-level configuration that prevents new namespaces from running unprotected.

Goal

After completing this guide you will have migrated your cluster from PodSecurityPolicy to Pod Security Admission with labeled namespaces, a cluster-level safety default, and no gap in pod security enforcement.

Prerequisites

  • A Kubernetes cluster running 1.25 or later (PSA is GA since 1.25)
  • kubectl access with permissions to label namespaces, view PSPs, and manage RBAC bindings
  • If you are still on Kubernetes 1.22 through 1.24, PSA is available as an alpha (1.22) or beta (1.23 and 1.24) admission plugin; the label syntax is the same, but the configuration API version differs (see step 6)
  • Familiarity with RBAC, since namespace label permissions determine who can weaken enforcement

The PSP removal timeline

PodSecurityPolicy was deprecated in Kubernetes 1.21 (April 2021) and removed entirely in 1.25 (August 2022). The deprecation was not a cosmetic label. PSP had design flaws that could not be fixed without breaking changes: a confusing application mechanism where it was easy to accidentally grant broader permissions than intended, no audit mode for safe rollout, and a mutation capability that made it impossible to reason about what actually happened to a pod spec.

When PSP disappears (through deletion or a cluster upgrade to 1.25), any namespace without PSA labels has no pod security enforcement at all. Pods can run as root, mount host paths, and request arbitrary capabilities. This is the single most dangerous moment in the migration.

PSP removal does not affect securityContext in pod specs. securityContext is the security configuration set on the pod and its containers; it is unrelated to the admission policy that validates it.

Three profiles: privileged, baseline, restricted

Pod Security Standards define three profiles that are cumulative in strictness:

Privileged is unrestricted. No security constraints. Use it for infrastructure namespaces (kube-system, monitoring, CNI plugins) where workloads legitimately need host access.

Baseline blocks known privilege escalation paths while remaining adoptable for most application workloads. It forbids hostNetwork, hostPID, hostIPC, privileged containers, and hostPath volumes, and restricts added capabilities to a safe default set (NET_BIND_SERVICE, CHOWN, SETUID, SETGID, and others). Seccomp cannot be set to Unconfined. SELinux types are limited to container_t, container_init_t, and container_kvm_t.

Restricted adds everything in Baseline plus: runAsNonRoot must be true, allowPrivilegeEscalation must be false, capabilities must drop ALL (only NET_BIND_SERVICE may be re-added), and a seccompProfile of RuntimeDefault or Localhost is mandatory.

Every container in the pod must comply. If one init container fails validation, the entire pod is rejected.

Which profile for which namespace

Namespace type                                      Recommended profile
Application workloads (hardened)                    restricted
Application workloads (standard)                    baseline
Infrastructure (CNI, logging, monitoring, ingress)  privileged
kube-system, kube-public, kube-node-lease           exempt via AdmissionConfiguration

The Restricted profile fails for the majority of workloads that have not been explicitly hardened. Default nginx images run as root. Calico, Flannel, and Cilium need NET_ADMIN, SYS_ADMIN, and host namespaces. Log shippers need hostPath volumes to read /var/log. Start enforcement at Baseline for application namespaces, then tighten to Restricted after fixing workload securityContext fields.

Three enforcement modes: enforce, audit, warn

PSA has three modes that you set independently per namespace:

Mode     Blocks pods?  What it reports                     Applies to
enforce  Yes           Rejection error at pod creation     Pod objects only
audit    No            Annotation in Kubernetes audit log  Workload resources + pods
warn     No            Warning: message in kubectl output  Workload resources + pods

The asymmetry between enforce and the other two modes is the most commonly misunderstood behavior. enforce evaluates only the resulting Pod object, not the Deployment or StatefulSet that created it. You can kubectl apply a Deployment with a violating pod template and it will succeed. The block happens when the controller creates the actual Pod. The Deployment sits in your cluster while its ReplicaSet fails to create pods in a loop.
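
A quick way to see this for yourself (a sketch; the deployment name is hypothetical, and it assumes a namespace with only the enforce label set to restricted):

# A Deployment whose pod template runs as root is accepted...
kubectl create deployment bad-app --image=nginx -n production
# deployment.apps/bad-app created

# ...but its ReplicaSet cannot create pods:
kubectl describe rs -l app=bad-app -n production
# Events:
#   Warning  FailedCreate  ... pods "bad-app-..." is forbidden:
#   violates PodSecurity "restricted:v1.31": allowPrivilegeEscalation != false, ...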

audit and warn evaluate workload resources (Deployments, Jobs, StatefulSets) at the template level, catching violations before pods are created. This is why running audit and warn first is not optional; it is the only way to catch template-level problems before enforce blocks pods.

warn does not block anything. It sends a Warning: prefix to the kubectl client. Operators sometimes see warn output and assume their workloads are protected. They are not protected until enforce is applied.

Step 1: audit your current PSPs

Before migrating, determine what your PSPs actually enforce. PSA cannot replicate all PSP functionality. Specifically, PSA is non-mutating: it validates but never modifies pod specs. If you relied on PSP to set defaults (like defaultAllowPrivilegeEscalation or runAsGroup: MustRunAs), you need a mutating admission webhook or a policy engine like Kyverno as a supplement.

List your current PSPs and identify mutating fields:

# List all PSPs (only works before 1.25)
kubectl get psp

# Export for analysis
kubectl get psp -o yaml > current-psps.yaml

Remove these mutating-only fields from your analysis (PSA has no equivalent):

  • defaultAllowPrivilegeEscalation
  • defaultAddCapabilities
  • runtimeClass.defaultRuntimeClassName
  • Seccomp and AppArmor default profile annotations

Also note fields that PSA's three fixed profiles do not validate: allowedHostPaths (path-specific restrictions), allowedFlexVolumes, allowedCSIDrivers, forbiddenSysctls, runAsGroup (MustRunAs strategy), supplementalGroups, and fsGroup. If your security posture depends on these, PSA alone is insufficient.
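
A rough scan of the export from above flags PSPs that use these fields (a heuristic grep only; it matches field names anywhere in the file):

# Flag fields that PSA cannot replicate (mutating defaults and granular validation)
grep -nE 'defaultAllowPrivilegeEscalation|defaultAddCapabilities|allowedHostPaths|allowedFlexVolumes|forbiddenSysctls|supplementalGroups|fsGroup' current-psps.yaml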

Step 2: label namespaces with audit and warn

Apply non-blocking labels to all namespaces to observe violations without disrupting workloads:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest    # forward-looking
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest

Apply to every namespace at once:

# Label every namespace with audit + warn at the restricted level (non-blocking)
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl label --overwrite ns "$ns" \
    pod-security.kubernetes.io/audit=restricted \
    pod-security.kubernetes.io/audit-version=latest \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/warn-version=latest
done

Expected output: each kubectl label command prints namespace/<name> labeled.

Step 3: fix workload violations

Re-apply or restart your workloads to trigger evaluation against the new labels. Watch for Warning: messages in kubectl output:

kubectl rollout restart deployment -n production
# Expected: Warning: would violate PodSecurity "restricted:latest":
# allowPrivilegeEscalation != false, ...

Check the Kubernetes audit log for pod-security annotations that identify violations in audit mode.
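
Audit-mode violations appear on audit events under the pod-security.kubernetes.io/audit-violations annotation. A sketch for summarizing them, assuming your API server writes a JSON-lines audit log to /var/log/kubernetes/audit.log (the actual path depends on your --audit-log-path setting):

# Count distinct violation messages in the audit log (path is cluster-specific)
grep 'pod-security.kubernetes.io/audit-violations' /var/log/kubernetes/audit.log \
  | jq -r '.annotations["pod-security.kubernetes.io/audit-violations"]' \
  | sort | uniq -c | sort -rn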

The typical fixes involve adding or tightening securityContext at both the pod and container level:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      securityContext:
        runAsNonRoot: true             # pod-level
        seccompProfile:
          type: RuntimeDefault         # required by restricted
      containers:
      - name: web
        image: nginxinc/nginx-unprivileged   # example image that runs as non-root
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]              # drop all, then add back what you need
            add: ["NET_BIND_SERVICE"]  # only if the container binds below 1024

Iterate until kubectl apply produces no warnings. System namespaces (where CNI, logging, and monitoring run) will show warnings under the restricted profile. That is expected; those namespaces will get the privileged profile in the next step.

Step 4: enable enforce mode per namespace

Once audit and warn show no violations for a namespace, apply the enforce label:

kubectl label --overwrite ns production \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=v1.31

Pin enforce-version to a specific Kubernetes minor version. This prevents a cluster upgrade from introducing new policy checks that suddenly block previously compliant pods. Use latest for audit and warn versions so you get early visibility into future requirements.

The recommended pattern for application namespaces that have been fully hardened:

labels:
  pod-security.kubernetes.io/enforce: restricted
  pod-security.kubernetes.io/enforce-version: v1.31    # pinned
  pod-security.kubernetes.io/audit: restricted
  pod-security.kubernetes.io/audit-version: latest      # forward-looking
  pod-security.kubernetes.io/warn: restricted
  pod-security.kubernetes.io/warn-version: latest

For namespaces that are not yet fully hardened, enforce baseline and audit/warn restricted. Roll out namespace by namespace. Do not flip every namespace to enforce in a single operation.
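
For example, labeling a hypothetical staging namespace in that intermediate state:

kubectl label --overwrite ns staging \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=v1.31 \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=latest \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest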

Step 5: remove PSP objects and RBAC bindings

After PSA enforce is active on all namespaces, clean up PSP artifacts:

# Verify no remaining PSP RBAC dependencies
kubectl get clusterrolebinding -o json | jq '.items[] | select(.roleRef.name | test("psp")) | .metadata.name'
kubectl get rolebinding -A -o json | jq '.items[] | select(.roleRef.name | test("psp")) | {ns: .metadata.namespace, name: .metadata.name}'

# Remove PSP RBAC bindings
kubectl delete clusterrolebinding <psp-binding-name>
kubectl delete rolebinding <psp-binding-name> -n <namespace>

# Delete PSPs (only on clusters still below 1.25)
kubectl delete psp --all

On Kubernetes 1.25+, PSP objects no longer exist. The risk is that the upgrade silently removed PSP enforcement while your namespaces had no PSA labels. If you are reading this after upgrading, check whether your namespaces are labeled. Unlabeled namespaces have no enforcement.
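
A quick way to find the gap (requires jq; prints every namespace with no enforce label):

kubectl get ns -o json | jq -r '.items[]
  | select(.metadata.labels["pod-security.kubernetes.io/enforce"] == null)
  | .metadata.name'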

Step 6: set cluster-level defaults

Namespace labels protect labeled namespaces. New namespaces created without labels have no protection. A cluster-level AdmissionConfiguration catches this gap:

# /etc/kubernetes/psa-config.yaml
apiVersion: apiserver.config.k8s.io/v1       # v1 for K8s 1.25+
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces:
        - kube-system
        - kube-public
        - kube-node-lease

Reference this file in the API server configuration:

# For kubeadm clusters, add to ClusterConfiguration:
# apiServer:
#   extraArgs:
#     admission-control-config-file: /etc/kubernetes/psa-config.yaml
#   extraVolumes:
#   - name: psa-config
#     hostPath: /etc/kubernetes/psa-config.yaml
#     mountPath: /etc/kubernetes/psa-config.yaml
#     readOnly: true

The PodSecurityConfiguration API version (pod-security.admission.config.k8s.io) depends on your Kubernetes version: v1 for 1.25+, v1beta1 for 1.23 and 1.24, v1alpha1 for 1.22.

On managed clusters (EKS, GKE, AKS): all three providers enable PSA by default but set the cluster-level default to privileged for all modes, meaning no restrictions are applied unless you explicitly label namespaces. You typically cannot modify the AdmissionConfiguration directly on managed clusters. Namespace labels are your primary configuration surface.

Step 7: update automation for new namespaces

Every namespace provisioning tool (Helm charts, Kustomize bases, Terraform modules, CI/CD templates) must apply PSA labels to new namespaces. Without automation, the next namespace someone creates through kubectl create ns runs unprotected.

If you use a GitOps tool like ArgoCD, add the labels to your namespace manifests in the Git repository. If you use Terraform with the Kubernetes provider, add the labels to your kubernetes_namespace resources.

Verify the final result

Check that all namespaces have enforcement labels:

kubectl get ns --show-labels | grep pod-security

Expected output: every namespace shows pod-security.kubernetes.io/enforce with a value of baseline or restricted. System namespaces either show privileged or are listed in the AdmissionConfiguration exemptions.
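
For a compact per-namespace view, the -L flag prints one column per label key:

# One row per namespace, one column per PSA mode
kubectl get ns \
  -L pod-security.kubernetes.io/enforce \
  -L pod-security.kubernetes.io/audit \
  -L pod-security.kubernetes.io/warn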

Test that enforcement actually blocks violations:

kubectl run test-privileged --image=nginx \
  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"nginx","image":"nginx","securityContext":{"privileged":true}}]}}' \
  -n production --dry-run=server
# Expected: Error from server (Forbidden): ... violates PodSecurity "baseline:..."

If the test pod is accepted, the namespace labels are missing or set to privileged.

Exemptions and their risks

Exemptions are configured in the AdmissionConfiguration and bypass all PSA modes (enforce, audit, and warn). Three dimensions exist:

  1. Namespaces (commonly kube-system)
  2. Usernames (e.g., CI system accounts)
  3. RuntimeClassNames (pods using specific runtime classes)

Do not exempt controller service accounts like system:serviceaccount:kube-system:replicaset-controller. Exempting a controller effectively exempts any user who can create the corresponding workload resource, because the pod is ultimately created by the controller's identity.

Anyone with update or patch permissions on namespace objects can remove PSA labels and disable enforcement. Lock down namespace label permissions through RBAC or add a validating admission webhook to prevent unauthorized label changes.
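
A minimal RBAC sketch of the read-only side (names are illustrative): grant namespace visibility without the verbs that allow label changes, and reserve update and patch on namespaces for cluster admins.

# Illustrative ClusterRole: namespaces can be read but not relabeled
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-reader
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]   # deliberately omits update and patch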

When PSA is not enough

PSA's three fixed profiles cover most use cases. You need a third-party policy engine when you require:

  • Per-ServiceAccount policies within a namespace (PSA is namespace-scoped only)
  • Mutation (automatically defaulting security fields on pods that do not set them)
  • Policies on non-pod resources (NetworkPolicies, ConfigMaps, custom resources)
  • Granular custom rules beyond the three fixed profiles

Kyverno (a CNCF incubating project) writes policies in YAML and supports validation, mutation, and generation. OPA Gatekeeper (built on OPA, a CNCF graduated project) uses the Rego language and is more flexible but has a steeper learning curve.

The recommended approach: use PSA as the built-in baseline (zero dependencies, always enabled) and add Kyverno or Gatekeeper for rules PSA cannot express. Do not replace PSA with a policy engine; layer on top of it.
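
As an example of the layering, here is a sketch of a Kyverno mutation that fills in a default seccompProfile on pods that omit it, so they pass the restricted profile (the policy name and details are illustrative; verify the syntax against the Kyverno docs for your version):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-seccomp-profile      # illustrative name
spec:
  rules:
  - name: add-seccomp-if-missing
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          securityContext:
            # Kyverno's +() anchor adds the field only when it is absent
            +(seccompProfile):
              type: RuntimeDefault

Because mutating admission runs before validating admission, PSA evaluates the mutated pod, so the default satisfies the restricted check.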

Common troubleshooting

Pods rejected but the Deployment applied successfully. This is the enforce-mode asymmetry described above. Check the ReplicaSet events: kubectl describe rs <replicaset-name> -n <namespace>. The events show the pod-level rejection. Fix the pod template's securityContext.

No warnings appear after labeling. The warn and audit labels evaluate workloads at apply time. Existing pods are not re-evaluated. Restart or re-apply your workloads: kubectl rollout restart deployment -n <namespace>.

Workloads in kube-system break after labeling. System components need the privileged profile. Either label kube-system as privileged or exempt it in the AdmissionConfiguration.

Pods pass warn mode but fail enforce mode. This can happen when the pod template and the actual pod differ (admission webhooks that inject sidecars, for example). The injected sidecar may violate the policy even though the template passed. Check the actual pod spec, not just the Deployment template.

When to escalate

If pods are being rejected and you cannot identify which field violates the policy, collect:

  • The full error message from kubectl describe on the ReplicaSet or pod
  • The namespace labels: kubectl get ns <namespace> --show-labels
  • The pod spec: kubectl get pod <pod-name> -n <namespace> -o yaml
  • Kubernetes version: kubectl version
  • Whether any admission webhooks mutate pod specs: kubectl get mutatingwebhookconfigurations
  • The cluster-level AdmissionConfiguration (if you have access to the API server config)
