What you will have at the end
A Kustomize project with a shared base and per-environment overlays (dev, staging, production) that you can apply with a single kubectl apply -k command. Each overlay customizes replicas, images, resource limits, and config without modifying the base manifests.
Prerequisites
- kubectl v1.21 or later (the bundled Kustomize was updated from v2.0.3 to v4.0.5 in v1.21; earlier versions lack features used in this guide)
- A Kubernetes cluster to apply against (any distribution; a local kind or minikube cluster works)
- Familiarity with Kubernetes Deployments, Services, ConfigMaps, and namespaces
- Optional: the standalone kustomize binary for CI/CD pipelines that need features newer than what your kubectl bundles
When to use Kustomize instead of Helm
Kustomize and Helm solve related but different problems. A quick decision framework:
| Scenario | Recommended tool |
|---|---|
| Install an upstream chart (nginx-ingress, cert-manager) | Helm |
| Distribute your app to external teams or customers | Helm |
| Multi-environment config for an internal app | Kustomize |
| Conditional YAML structure (if/else logic in manifests) | Helm |
| Plain, auditable YAML in Git without template syntax | Kustomize |
| Team unfamiliar with Go templates | Kustomize |
The two are not mutually exclusive. Mature platform teams often use Helm to consume upstream charts and Kustomize to adapt those deployments per environment. For a deep dive on the Helm side, see my guide on Helm chart best practices.
Set up the base and overlay directory structure
The canonical layout separates shared manifests (the base) from environment-specific changes (overlays):
```text
k8s/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   └── service.yaml
└── overlays/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── replica-patch.yaml
    ├── staging/
    │   └── kustomization.yaml
    └── prod/
        ├── kustomization.yaml
        └── resource-limits-patch.yaml
```
Create the base
The base is a standalone, valid Kustomization that has no knowledge of the overlays referencing it:
```yaml
# k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
labels:
  - includeSelectors: true
    includeTemplates: true
    pairs:
      app.kubernetes.io/name: "inventory-api"
```
```yaml
# k8s/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: inventory-api
  template:
    metadata:
      labels:
        app.kubernetes.io/name: inventory-api
    spec:
      containers:
        - name: app
          image: registry.example.com/inventory-api:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
```
```yaml
# k8s/base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: inventory-api
spec:
  selector:
    app.kubernetes.io/name: inventory-api
  ports:
    - port: 80
      targetPort: 8080
```
You can apply the base directly with kubectl apply -k k8s/base/. That is an explicit Kustomize design goal: base files are always valid Kubernetes YAML, never templates.
Create an overlay
Each overlay references the base and layers environment-specific changes on top:
```yaml
# k8s/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: production
namePrefix: prod-
replicas:
  - name: inventory-api
    count: 5
images:
  - name: registry.example.com/inventory-api
    newTag: v1.4.2 # pin to a release tag in prod
patches:
  - path: resource-limits-patch.yaml
    target:
      kind: Deployment
      name: inventory-api
```
```yaml
# k8s/overlays/prod/resource-limits-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-api
spec:
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```
Apply an overlay
```shell
# Preview the rendered manifests
kubectl kustomize k8s/overlays/prod/

# Apply to the cluster
kubectl apply -k k8s/overlays/prod/
```
Expected output (abbreviated):
```text
service/prod-inventory-api created
deployment.apps/prod-inventory-api created
```
Note that the namespace: field only sets metadata.namespace on each rendered resource; it does not create the Namespace itself. Create it first with kubectl create namespace production, or add a Namespace manifest to the overlay's resources.
Verify the Deployment landed with the right replica count:
```shell
kubectl get deployment prod-inventory-api -n production -o jsonpath='{.spec.replicas}'
# Expected: 5
```
Choose a patching strategy
Kustomize supports two patch mechanisms, both accessed through the unified patches field. The older patchesStrategicMerge and patchesJson6902 fields are deprecated in Kustomize v5.
Strategic merge patches
A strategic merge patch looks like a partial copy of the resource you are modifying. Kustomize merges it using the Kubernetes strategic merge rules (lists of containers merge by name rather than being replaced wholesale).
Use strategic merge when you are adding or modifying fields and the patch shape is intuitive to read:
```yaml
# Inline strategic merge patch in kustomization.yaml
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: inventory-api
      spec:
        template:
          spec:
            containers:
              - name: app
                env:
                  - name: LOG_LEVEL
                    value: "debug"
    target:
      kind: Deployment
      name: inventory-api
```
JSON 6902 patches
RFC 6902 patches use explicit operations (add, remove, replace, move, copy, test) with JSON Pointer paths. More verbose, but the only clean way to remove a field or target a list item by index:
```yaml
patches:
  - patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: "512Mi"
      - op: remove
        path: /spec/template/spec/containers/0/livenessProbe
    target:
      group: apps
      version: v1
      kind: Deployment
      name: inventory-api
```
Target selectors
The target block supports regex and label matching, so a single patch can apply to multiple resources:
```yaml
target:
  kind: Deployment
  name: "inventory-.*" # regex
  labelSelector: "tier=backend"
```
Generate ConfigMaps and Secrets with automatic rollouts
The configMapGenerator and secretGenerator create resources from files, env files, or literal values. The most important feature is the hash suffix appended to the generated name.
```yaml
# k8s/overlays/prod/kustomization.yaml (add to existing)
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=warn
      - MAX_CONNECTIONS=50
    envs:
      - .env.prod # KEY=VALUE pairs from file
```
The generated ConfigMap gets a name like app-config-9m5b4c7f. When the content changes, the hash changes. Kustomize rewrites every reference to the ConfigMap in your Deployment spec, so the Deployment spec itself changes and Kubernetes triggers a rolling update automatically. Config changes propagate to pods without manual kubectl rollout restart.
Reference the generator name (without the hash) in your base Deployment:
```yaml
# k8s/base/deployment.yaml (container spec)
envFrom:
  - configMapRef:
      name: app-config # Kustomize rewrites this to app-config-<hash>
```
If you need a stable name (for example, a DaemonSet that handles its own config reload), disable the suffix:
```yaml
configMapGenerator:
  - name: stable-config
    literals:
      - KEY=value
    options:
      disableNameSuffixHash: true # pods will NOT auto-restart on config change
```
Secrets: do not commit plaintext
secretGenerator works identically to configMapGenerator but produces base64-encoded Secrets. Base64 is encoding, not encryption. Never commit literals: [password=mysecret] to Git.
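To see why, decode a value yourself; anyone with read access to the manifest can do the same (the value here is illustrative):

```shell
# base64 round-trips with no key involved: it hides nothing
echo -n "mysecret" | base64        # bXlzZWNyZXQ=
echo -n "bXlzZWNyZXQ=" | base64 -d # mysecret
```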
For production GitOps, use one of these approaches:
- SOPS + age encryption via the KSOPS plugin: encrypts only the values, keys stay readable for diffs. Flux decrypts SOPS natively.
- External Secrets Operator: syncs secrets from AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault into Kubernetes Secrets. Your kustomization references ExternalSecret CRDs, not actual values.
- Sealed Secrets: encrypt with the cluster's public key (Bitnami controller). The SealedSecret resource can be committed safely.
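Whichever backend you choose, keep generator inputs out of Git. One common pattern is to generate the Secret from a file that is gitignored locally and written by CI from the secret store at build time (the file name here is an assumption):

```yaml
secretGenerator:
  - name: db-credentials
    envs:
      - db.env # gitignored; produced by CI, KEY=VALUE pairs
```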
Use components for cross-cutting features
Kustomize components (introduced in v3.7.0) solve the problem of N optional features across M environments. Without components, you end up duplicating patch files across overlays.
A component uses kind: Component and is not independently deployable. It acts on the resources of whatever parent Kustomization includes it:
```yaml
# k8s/components/prometheus-metrics/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
  - service-monitor.yaml # adds a ServiceMonitor
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: inventory-api
      spec:
        template:
          metadata:
            annotations:
              prometheus.io/scrape: "true"
              prometheus.io/port: "9090"
    target:
      kind: Deployment
      name: inventory-api
```
Include it in overlays that need it:
```yaml
# k8s/overlays/prod/kustomization.yaml
components:
  - ../../components/prometheus-metrics
  - ../../components/network-policy
```

```yaml
# k8s/overlays/dev/kustomization.yaml
components:
  - ../../components/prometheus-metrics
  # no network-policy in dev
```
Good use cases for components: Prometheus scraping annotations, sidecar injection, HPA configuration, NetworkPolicy, and feature flags via ConfigMap patches.
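As a concrete sketch, a hypothetical HPA component would carry a single manifest; its kustomization.yaml lists hpa.yaml under resources, exactly like the prometheus-metrics component above (all names and thresholds here are illustrative):

```yaml
# k8s/components/hpa/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inventory-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inventory-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Overlays that include the component get autoscaling; overlays that omit it keep the static replicas count.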
Integrate with ArgoCD
ArgoCD has native Kustomize support. Point an Application to a directory containing kustomization.yaml and ArgoCD auto-detects and runs kustomize build before applying:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: inventory-api-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/inventory-api.git
    targetRevision: HEAD
    path: k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
For multi-environment setups, an ApplicationSet avoids repeating the spec per environment:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: inventory-api
spec:
  generators:
    - list:
        elements:
          - env: dev
            cluster: https://dev.k8s.example.com
          - env: staging
            cluster: https://staging.k8s.example.com
          - env: prod
            cluster: https://prod.k8s.example.com
  template:
    metadata:
      name: "inventory-api-{{env}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/my-org/inventory-api.git
        path: "k8s/overlays/{{env}}"
      destination:
        server: "{{cluster}}"
        namespace: "{{env}}"
```
One thing to watch: ArgoCD's bundled Kustomize version may lag behind the standalone binary. If you need v5 features (the modern labels or replacements fields), configure a custom version in argocd-cm.
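The configuration looks roughly like this (the version key and binary path are illustrative; check the ArgoCD "Custom Kustomize versions" docs for your release, and make the binary available in the repo-server image):

```yaml
# argocd-cm ConfigMap (namespace: argocd), data section
data:
  kustomize.version.v5.4.1: /custom-tools/kustomize_5.4.1
```

An Application then opts in with spec.source.kustomize.version: v5.4.1.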
Integrate with Flux
Flux uses the Kustomization CRD from kustomize.toolkit.fluxcd.io to wrap Kustomize overlays in a Kubernetes-native workflow:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: inventory-api
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/my-org/inventory-api.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: inventory-api-prod
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: inventory-api
  path: "./k8s/overlays/prod"
  prune: true
  targetNamespace: production
  wait: true
```
Flux adds capabilities on top of plain Kustomize. Two that matter most:
SOPS decryption without a plugin: Flux's kustomize-controller decrypts SOPS-encrypted secrets natively:
```yaml
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-age # Secret containing your age private key
```
Post-build variable substitution: resolve ${VARIABLE} tokens in manifests after kustomize build runs. This is a Flux feature, not native Kustomize:
```yaml
spec:
  postBuild:
    substitute:
      ENVIRONMENT: production
      REGION: eu-west-1
    substituteFrom:
      - kind: ConfigMap
        name: cluster-vars
```
Dependency ordering ensures infrastructure lands before applications:
```yaml
spec:
  dependsOn:
    - name: infra-prod
```
Verify the complete project
After setting up the full structure, run a dry-run diff before applying to catch misconfigurations:
```shell
# Render and diff against the cluster state
kubectl diff -k k8s/overlays/prod/
```
If the diff looks right, apply:
```shell
kubectl apply -k k8s/overlays/prod/
```
Confirm all resources landed:
```shell
kubectl get all -n production -l app.kubernetes.io/name=inventory-api
```
Common troubleshooting
Immutable selector error on re-apply. If you used the deprecated commonLabels field, it mutates label selectors, which Kubernetes makes immutable after the first apply. Migrate to the modern labels field with includeSelectors: false, or delete and recreate the Deployment.
Config changes not picked up by pods. If you set disableNameSuffixHash: true on a generator, the Deployment spec does not change when the ConfigMap content changes. Either remove the flag to restore automatic rollouts, or add kubectl rollout restart deployment/inventory-api to your CI pipeline.
Build fails with "path not found." Kustomize resolves all paths relative to the kustomization.yaml file. Double-check that ../../base actually points to the base directory from the overlay's location.
Deprecation warnings in Kustomize v5. The fields patchesStrategicMerge, patchesJson6902, commonLabels, and vars are deprecated. Use patches, labels, and replacements instead. See the migration guide by Nick Janetakis.
kubectl-bundled Kustomize version mismatch. The Kustomize version inside kubectl may be older than the standalone binary. Run kubectl version --client and kustomize version to compare. For CI, install the standalone binary and use kustomize build | kubectl apply -f -.
Complete overlay example
For reference, here is the full prod overlay kustomization.yaml with all the features from this guide combined:
```yaml
# k8s/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: production
namePrefix: prod-
replicas:
  - name: inventory-api
    count: 5
images:
  - name: registry.example.com/inventory-api
    newTag: v1.4.2
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=warn
      - MAX_CONNECTIONS=50
components:
  - ../../components/prometheus-metrics
  - ../../components/network-policy
patches:
  - path: resource-limits-patch.yaml
    target:
      kind: Deployment
      name: inventory-api
```