What Kyverno is
Kyverno is a policy engine designed specifically for Kubernetes. It runs as a set of admission controllers inside your cluster and intercepts API requests before they are persisted to etcd. The project entered the CNCF Sandbox in 2020, reached Incubating status in 2022, and graduated in March 2026.
The defining design choice: policies are Kubernetes resources. You write them in YAML. You apply them with kubectl. You version them in Git alongside your manifests. There is no separate language to learn, no external policy server to operate, and no translation layer between what you deploy and what you enforce.
This matters because the barrier to adopting policy enforcement has traditionally been high. OPA/Gatekeeper requires writing policies in Rego, a purpose-built logic language that is powerful but has a steep learning curve. Kyverno trades some of that raw expressiveness for immediate familiarity: if you can write a Kubernetes manifest, you can write a Kyverno policy.
How Kyverno works
When you install Kyverno (typically via a single Helm chart), it registers two webhook configurations with the Kubernetes API server: a MutatingWebhookConfiguration and a ValidatingWebhookConfiguration. From that point, every resource creation, update, or deletion that matches a policy triggers an AdmissionReview request to Kyverno.
The flow:
- A user or controller submits a resource to the API server.
- The API server routes the request through its admission chain.
- Kyverno's mutating webhook fires first. Mutate rules add defaults, inject sidecars, or rewrite fields.
- Kyverno's validating webhook fires next. Validate rules check the (now possibly mutated) resource against policy conditions. If a rule fails and the policy is set to Enforce, the request is rejected with a clear error message.
- Generate rules create new dependent resources (a NetworkPolicy, a ResourceQuota) in the background after a resource is admitted.
- Image verification rules check container image signatures and attestations against known keys or certificates.
The components
Kyverno deploys as multiple controllers, each with a distinct responsibility:
- Admission controller receives AdmissionReview requests and processes validate, mutate, and image verification rules. It also contains the webhook controller that dynamically configures which resources the webhooks intercept, based on the policies currently installed.
- Background controller handles generate rules and mutate-existing rules. These do not run inline with the admission request but reconcile asynchronously.
- Reports controller aggregates results from admission events and periodic background scans into PolicyReport and ClusterPolicyReport resources, following the Kubernetes Policy WG reporting standard.
Policy scope: ClusterPolicy vs. Policy
A ClusterPolicy applies to resources across all namespaces. A Policy is namespaced and applies only within the namespace where it is created. This mirrors the Role/ClusterRole distinction in Kubernetes RBAC.
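As a sketch of the namespaced variant, the same rule shape works with `kind: Policy`; the namespace, label key, and policy name here are illustrative:

```yaml
# Hypothetical namespaced policy: applies only inside the "team-a" namespace
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: require-team-label
  namespace: team-a # a Policy only affects resources in its own namespace
spec:
  validationFailureAction: Audit
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Pods in team-a should carry a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*" # any non-empty value
```

Apart from the `kind` and the `metadata.namespace`, the spec is identical to a ClusterPolicy, which makes it easy to promote a namespaced policy to cluster scope later.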
Audit vs. Enforce
Every validation rule has a failureAction (or the older validationFailureAction field). Set it to Audit and Kyverno logs violations in PolicyReport resources without blocking anything. Set it to Enforce and violations are rejected at admission time. This two-phase approach lets you roll policies out safely: deploy in Audit mode, review the reports, fix violations, then switch to Enforce.
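While a policy runs in Audit mode, each violation lands as a result entry in a PolicyReport. The entry below is an abbreviated, illustrative example following the wgpolicyk8s.io reporting API; resource names and counts are made up:

```yaml
# Abbreviated PolicyReport produced in Audit mode (illustrative values)
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-default
  namespace: default
results:
- policy: require-app-label
  rule: check-app-label
  result: fail # would have been rejected in Enforce mode
  message: "Every Deployment must have an 'app.kubernetes.io/name' label."
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: legacy-api
    namespace: default
summary:
  pass: 12
  fail: 1
```

Reviewing these reports before flipping to Enforce tells you exactly which workloads would break and why.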
What Kyverno does: the four rule types
Validate: reject resources that break your standards
Validation is the most common use case. You define conditions that resources must meet, and Kyverno blocks or reports violations.
This policy requires every Deployment to carry an app.kubernetes.io/name label:
```yaml
# Kyverno 1.17+ (kyverno.io/v1 ClusterPolicy syntax)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-app-label
    match:
      any:
      - resources:
          kinds:
          - Deployment
    validate:
      message: "Every Deployment must have an 'app.kubernetes.io/name' label."
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*" # any non-empty value
```
The "?*" pattern means "at least one character." A Deployment without this label gets rejected with the message you defined.
Another common validation: disallowing privileged containers. This policy implements the Pod Security Standards Baseline profile:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
  annotations:
    policies.kyverno.io/category: Pod Security Standards (Baseline)
spec:
  validationFailureAction: Enforce
  rules:
  - name: deny-privileged
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          =(initContainers): # the =() anchor makes the field optional
          - =(securityContext):
              =(privileged): "false" # if set at all, it must be false
          containers:
          - =(securityContext):
              =(privileged): "false"
```
The `=()` anchors matter here: without them, the pattern would also reject pods that simply omit `securityContext`. With them, the rule only fires when the field is present and set to true.
Kyverno ships a full policy library with ready-to-use policies that map to the Kubernetes Pod Security Standards at both the Baseline and Restricted profile levels.
Mutate: set defaults and inject configuration
Mutate rules modify resources before they are admitted. This is declarative and idempotent: the same mutation applied twice produces the same result.
This policy adds a cost-center: engineering label to every new Namespace that does not already have one:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-cost-center
spec:
  rules:
  - name: add-cost-center-label
    match:
      any:
      - resources:
          kinds:
          - Namespace
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(cost-center): engineering # the + prefix means "add if not present"
```
The +(key) syntax is a Kyverno convention: it only adds the field when it does not already exist. If the Namespace already has a cost-center label, the mutation skips it.
Sidecar injection follows the same principle. Instead of relying on a bespoke webhook (like Istio's sidecar injector), you define a Kyverno mutate policy that patches a container into the pod spec when a specific annotation is present.
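As a sketch of that pattern (the annotation key, container name, and image are all made up), a conditional anchor, a key wrapped in parentheses, makes the patch apply only when the annotation is present with the given value:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-logging-sidecar
spec:
  rules:
  - name: add-sidecar
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          annotations:
            # conditional anchor: only patch pods carrying this annotation
            (logging.example.com/inject): "true"
        spec:
          containers:
          - name: log-shipper # hypothetical sidecar
            image: example.com/log-shipper:1.0
```

Pods without the annotation pass through untouched; pods with it get the extra container merged into their spec at admission time.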
Generate: auto-create resources when something happens
Generate rules create new Kubernetes resources in response to events. The canonical use case: whenever a new Namespace is created, automatically generate a deny-all NetworkPolicy and a default ResourceQuota.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: namespace-defaults
spec:
  rules:
  - name: generate-default-deny-networkpolicy
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: default-deny
      namespace: "{{request.object.metadata.name}}" # the newly created namespace
      data:
        spec:
          podSelector: {} # matches all pods in the namespace
          policyTypes:
          - Ingress
          - Egress # no allow rules = deny all
```
This gives every new namespace a network-level zero-trust starting point. Teams then add their own NetworkPolicies to open the specific traffic their workloads need.
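The ResourceQuota mentioned above follows the same pattern. This sketch (the quota values are illustrative, pick your own) could be added as a second rule in the same ClusterPolicy:

```yaml
  - name: generate-default-resourcequota
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: ResourceQuota
      name: default-quota
      namespace: "{{request.object.metadata.name}}" # the newly created namespace
      synchronize: true # re-create the quota if someone deletes it
      data:
        spec:
          hard:
            requests.cpu: "4"
            requests.memory: 8Gi
            limits.cpu: "8"
            limits.memory: 16Gi
```

With `synchronize: true`, the background controller also keeps the generated resource in place, so deleting the quota by hand just triggers its re-creation.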
Verify images: supply chain security
Kyverno can verify that container images were signed with a known key before they run in the cluster. This uses the Sigstore cosign format.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-cosign-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "registry.yourcompany.nl/production/*"
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
              -----END PUBLIC KEY-----
```
If an image in the registry.yourcompany.nl/production/ path was not signed with the matching private key, the pod is rejected. Kyverno also supports keyless verification via Fulcio certificates and can validate in-toto attestations for SLSA provenance data.
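Keyless verification swaps the static key for an identity check on the signing certificate. A hedged sketch of the attestor fragment, where the issuer and subject are placeholders for whatever CI identity signs your images:

```yaml
    verifyImages:
    - imageReferences:
      - "registry.yourcompany.nl/production/*"
      attestors:
      - entries:
        - keyless:
            # OIDC issuer of the signing identity (placeholder: GitHub Actions)
            issuer: "https://token.actions.githubusercontent.com"
            # certificate subject: the workflow identity that performed the signing
            subject: "https://github.com/your-org/your-repo/.github/workflows/release.yaml@refs/heads/main"
            rekor:
              url: https://rekor.sigstore.dev # public transparency log
```

Instead of guarding a private key, you assert "this image was signed by this workflow in this repository," which tends to fit short-lived CI credentials better.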
Image verification results are cached with a configurable TTL, so the network overhead of checking signatures does not hit every pod creation.
The evolution toward CEL
Kyverno 1.17 (released February 2026) promoted a new generation of policy types to stable (v1): ValidatingPolicy, MutatingPolicy, GeneratingPolicy, ImageValidatingPolicy, and DeletingPolicy. These use Common Expression Language (CEL) instead of Kyverno's original JMESPath-based pattern matching.
Why CEL? Kubernetes itself adopted CEL for ValidatingAdmissionPolicies starting in Kubernetes 1.26. By aligning with CEL, Kyverno policies can convert directly to native Kubernetes admission policies, reducing lock-in.
The legacy ClusterPolicy and Policy types (using JMESPath) still work in Kyverno 1.17 but are officially deprecated and will be removed in a future release. The YAML examples in this article use the legacy syntax because it remains the most widely deployed and documented format, but new installations should evaluate the CEL-based types.
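As a rough sketch of the CEL-based equivalent of the label policy above (field names follow the new ValidatingPolicy type, which mirrors the native Kubernetes ValidatingAdmissionPolicy shape; check the docs for your exact Kyverno version, since the API group and version have evolved):

```yaml
apiVersion: policies.kyverno.io/v1alpha1 # v1 once promoted to stable
kind: ValidatingPolicy
metadata:
  name: require-app-label-cel
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: >-
      has(object.metadata.labels) &&
      'app.kubernetes.io/name' in object.metadata.labels &&
      object.metadata.labels['app.kubernetes.io/name'] != ''
    message: "Every Deployment must have an 'app.kubernetes.io/name' label."
```

Because the `matchConstraints` and `validations` stanzas follow the native shape, a policy like this can be translated to a Kubernetes ValidatingAdmissionPolicy with little to no rewriting.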
How Kyverno compares to OPA/Gatekeeper
Both Kyverno and OPA/Gatekeeper solve the same core problem: enforcing policies on Kubernetes resources at admission time. Both are mature CNCF projects (OPA graduated in 2021, Kyverno in 2026). The choice between them depends on your team, your stack, and what you need beyond validation.
| Criterion | Kyverno | OPA/Gatekeeper |
|---|---|---|
| Policy language | YAML (JMESPath or CEL) | Rego |
| Learning curve | Low if you know Kubernetes YAML | Steeper; Rego is a purpose-built language |
| Kubernetes-native | Policies are CRDs; kubectl apply | ConstraintTemplates + Constraints (CRDs, but Rego embedded) |
| Validation | Yes | Yes |
| Mutation | Yes (built-in) | Limited (alpha since Gatekeeper 3.10) |
| Resource generation | Yes | No |
| Image verification | Yes (cosign, Sigstore) | No (requires external tooling) |
| Scope beyond Kubernetes | Expanding (JSON/YAML validation for IaC) | Broad (API authorization, CI pipelines, microservices) |
| CNCF status | Graduated (March 2026) | OPA graduated (January 2021); Gatekeeper is a subproject |
Choose Kyverno when your policy needs are Kubernetes-focused, your team knows YAML better than they know a policy language, and you want mutation, generation, and image verification without bolting on extra tools.
Choose OPA/Gatekeeper when you need a single policy engine across your entire stack (not just Kubernetes), your team has invested in Rego, or your compliance requirements demand the fine-grained logic that Rego provides.
A hybrid approach is possible. Some organizations use Kyverno for straightforward Kubernetes admission policies (label enforcement, security contexts, defaults) and OPA for complex cross-system authorization decisions.
What Kyverno is not
Kyverno replaces neither RBAC nor network policies. It complements them.
- RBAC controls who can perform actions on Kubernetes resources. Kyverno controls what those resources look like when they are created. A user with permission to create Deployments (via RBAC) can still be blocked by Kyverno if the Deployment does not meet policy requirements. But Kyverno cannot restrict which users have access; that is RBAC's job.
- Network policies control traffic between pods. Kyverno can generate NetworkPolicy resources automatically, but it does not enforce network traffic itself.
- Pod Security Standards (the built-in Kubernetes admission controller, PodSecurity) cover a fixed set of security checks. Kyverno can enforce the same checks plus anything else you define, but it runs as an external webhook, not as a built-in admission plugin.
One important edge case: the system:masters group in Kubernetes bypasses all admission webhooks, including Kyverno. Policies do not apply to identities in that group.
Where to go next
- For controlling who can do what in your cluster, read Kubernetes RBAC: role-based access control for clusters.
- The Kyverno policy library has ready-to-use policies organized by category (Pod Security Standards, best practices, supply chain security).
- The official Kyverno installation guide covers Helm-based deployment and high-availability configuration.