What you will have at the end
A working ConfigMap setup where configuration changes reach your running pods without manual intervention. You will understand which consumption method fits which scenario, how to implement file-based hot-reload correctly, and when to reach for Stakater Reloader or immutable ConfigMaps instead.
Prerequisites
- kubectl v1.21 or later (immutable ConfigMaps reached GA in v1.21)
- A running Kubernetes cluster (any distribution: EKS, GKE, AKS, kind, minikube)
- Familiarity with Deployments, Pods, and volumes (if PersistentVolumes are still hazy, read PersistentVolumes and PersistentVolumeClaims explained first)
- Helm v3 if you plan to install Stakater Reloader
Create a ConfigMap
A ConfigMap is a Kubernetes API object (apiVersion: v1, kind: ConfigMap) that stores non-confidential key-value data. Two optional fields hold the data: data for UTF-8 strings and binaryData for base64-encoded binary content. Keys in both fields must not overlap.
The total size of a ConfigMap is capped at 1 MiB, a limit inherited from etcd's request size constraints.
Declarative YAML
```yaml
# configmap-app.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: order-service-config
  namespace: production
data:
  LOG_LEVEL: "info"
  MAX_RETRIES: "3"
  application.yaml: |
    server:
      port: 8080
      read-timeout: 30s
    database:
      pool-size: 25
      idle-timeout: 300s
```

```shell
kubectl apply -f configmap-app.yaml
```
Imperative creation
```shell
# From literal key-value pairs
kubectl create configmap order-service-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=MAX_RETRIES=3

# From a file (the filename becomes the key)
kubectl create configmap nginx-config \
  --from-file=nginx.conf

# From an .env file (KEY=VALUE per line)
kubectl create configmap env-config \
  --from-env-file=.env.production
```
One gotcha with `--from-env-file`: keys that do not match the pattern `[A-Za-z_][A-Za-z0-9_]*` are silently skipped. No error, no warning.
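Because kubectl gives no feedback, it can pay to pre-validate an env file before handing it over. The sketch below is illustrative (the `check_env_keys` helper is mine, not part of kubectl); it applies the same key pattern quoted above:

```python
import re

# The C-identifier pattern that valid environment variable keys must match
ENV_KEY_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def check_env_keys(env_text: str) -> list[str]:
    """Return the keys that kubectl --from-env-file would silently drop."""
    skipped = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # blank lines, comments, and non-assignments are ignored
        key = line.split("=", 1)[0]
        if not ENV_KEY_RE.match(key):
            skipped.append(key)
    return skipped

print(check_env_keys("LOG_LEVEL=info\nmax-retries=3\n1BAD=x"))
# ['max-retries', '1BAD']
```

Running this in CI against `.env.production` turns a silent skip into a loud failure.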
Four ways to consume a ConfigMap in a pod
Each method has different update semantics. Picking the wrong one is the most common source of "I updated the ConfigMap but nothing happened."
Pattern 1: individual environment variables
```yaml
containers:
- name: order-service
  image: registry.internal/order-service:4.2.1
  env:
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        name: order-service-config
        key: LOG_LEVEL
        optional: true   # pod starts even if the key is absent
```
Values are resolved once, at pod creation time. Running pods never see updates. A pod restart is required.
Pattern 2: all keys as environment variables (envFrom)
```yaml
containers:
- name: order-service
  image: registry.internal/order-service:4.2.1
  envFrom:
  - configMapRef:
      name: order-service-config
```
Same update behavior as pattern 1: resolved at pod creation, never propagated to running pods. Keys that do not match the environment variable naming pattern are silently skipped.
Pattern 3: volume mount (the hot-reload path)
```yaml
spec:
  containers:
  - name: order-service
    image: registry.internal/order-service:4.2.1
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config
      readOnly: true
  volumes:
  - name: config-vol
    configMap:
      name: order-service-config
      items:                  # optional: select specific keys
      - key: application.yaml
        path: application.yaml
```
Files in the mounted volume are updated automatically by the kubelet within its sync period. The default is roughly 60 seconds, though the actual delay ranges from near-zero to about 90 seconds depending on kubelet cache configuration and jitter. Your application must actively re-read the file to pick up the change.
If `items` is specified, only the listed keys become files. If omitted, every key becomes a file named after the key.
Pattern 4: Kubernetes API watch
Application code uses a Kubernetes client library to GET or WATCH the ConfigMap resource directly. This gives near-zero latency on updates and works cross-namespace (given appropriate RBAC), but it adds complexity and a direct API server dependency. Most applications do not need it.
Update propagation summary
| Consumption method | Auto-updates running pods? | Typical delay | Application action needed? |
|---|---|---|---|
| `env` / `configMapKeyRef` | No | Never | Pod restart |
| `envFrom` / `configMapRef` | No | Never | Pod restart |
| Volume mount (no `subPath`) | Yes | 10 to 90 seconds | Re-read file |
| Volume mount (with `subPath`) | No | Never | Pod restart |
| Kubernetes API WATCH | Yes | Near-zero | Programmatic watch |
How volume updates actually work: the atomic symlink
Understanding this mechanism is non-optional if you are implementing file-based hot-reload.
When a ConfigMap is mounted as a volume, the kubelet does not write files directly. It creates a layered symlink structure:
```
/etc/config/
├── ..2026_04_09_10_00_00.123456789/   # timestamped directory with real files
│   └── application.yaml
├── ..data -> ..2026_04_09_10_00_00.123456789/   # symlink (swapped atomically)
└── application.yaml -> ..data/application.yaml  # user-visible symlink
```
When the ConfigMap changes, the kubelet creates a new timestamped directory, atomically swaps the ..data symlink, and deletes the old one. The update is atomic and consistent: you never see a half-updated config.
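The mechanism can be re-created locally, which is handy for testing hot-reload logic without a cluster. The sketch below is a simplified Python imitation of the kubelet's behavior, under stated assumptions: directory naming is simplified to a nanosecond timestamp, and cleanup of the old generation directory is omitted.

```python
import os
import tempfile
import time

def write_config_atomically(volume_dir: str, files: dict[str, str]) -> None:
    """Kubelet-style update: real files go into a fresh hidden directory,
    then the ..data symlink is retargeted with a single rename. Readers see
    either the old or the new generation, never a mix."""
    gen_dir = os.path.join(volume_dir, f"..{time.time_ns()}")
    os.makedirs(gen_dir)
    for name, content in files.items():
        with open(os.path.join(gen_dir, name), "w") as f:
            f.write(content)
        visible = os.path.join(volume_dir, name)  # user-visible path
        if not os.path.islink(visible):
            os.symlink(os.path.join("..data", name), visible)
    # The atomic step: rename(2) replaces an existing ..data symlink in one shot
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    os.symlink(os.path.basename(gen_dir), tmp_link)
    os.rename(tmp_link, os.path.join(volume_dir, "..data"))

vol = tempfile.mkdtemp()
write_config_atomically(vol, {"application.yaml": "pool-size: 25\n"})
write_config_atomically(vol, {"application.yaml": "pool-size: 50\n"})
with open(os.path.join(vol, "application.yaml")) as f:
    print(f.read())  # pool-size: 50
```

Point a file watcher at a directory managed this way and you can reproduce the exact event sequence your application will see in a real pod.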
The consequence for file watchers: the files at the user-visible path are not modified in place. The symlink target changes. Standard inotify watchers following individual files receive IN_DELETE_SELF, not IN_MODIFY or IN_CLOSE_WRITE. This confuses many hot-reload implementations.
Correct file-watch implementation
- Watch the directory (`/etc/config/`) or the `..data` symlink, not individual file paths
- Handle `IN_DELETE_SELF` as a config-updated event, not an error
- Re-establish the watch after a deletion event (the old target is gone)
- Do not use `IN_DONT_FOLLOW`
- Test on an actual Kubernetes cluster, not just a local filesystem (the symlink behavior differs)
Recommended libraries by language:
| Language | Library | Notes |
|---|---|---|
| Go | `fsnotify` | Watch directory, not individual files |
| Node.js | `chokidar` | Use stabilization delay (debounce) |
| Python | `watchdog` | Watch directory with debouncing |
| Java | NIO `WatchService` or Spring `@ConfigurationProperties` refresh | Directory-level watch |
The subPath trap
When a volume mount uses subPath, the file is mounted directly, bypassing the symlink chain. ConfigMap updates are never propagated to that file. This is a known, won't-fix limitation.
```yaml
volumeMounts:
- name: config-vol
  mountPath: /etc/nginx/nginx.conf
  subPath: nginx.conf   # auto-update is DISABLED for this mount
```
If you need a single file at a specific path and you also need live updates, mount the full directory to a separate path and configure your application to read from there. Or pair the subPath mount with Stakater Reloader, which will restart the pod on ConfigMap changes.
Automate restarts with Stakater Reloader
When configuration is consumed via environment variables (or subPath mounts), live updates do not reach running pods. Stakater Reloader automates the rolling restart that would otherwise require kubectl rollout restart.
Install with Helm
```shell
helm repo add stakater https://stakater.github.io/stakater-charts
helm install reloader stakater/reloader \
  -n reloader --create-namespace
```
Annotate your workload
Pick the annotation that matches your granularity:
```yaml
# Option A: restart on any ConfigMap or Secret change this workload references
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
```

```yaml
# Option B: restart only on changes to specific named ConfigMaps
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "order-service-config,feature-flags"
```
Reloader supports Deployments, StatefulSets, DaemonSets, Argo Rollouts, and CronJobs. For Argo Rollouts, add reloader.stakater.com/rollout-strategy: "restart" to avoid GitOps drift that the default annotation-patching strategy can cause with ArgoCD or Flux.
What Reloader does not do. It triggers a rolling restart (new pods). It does not perform in-process config reload. Applications still experience a brief rollover. For latency-sensitive workloads where zero-restart reload matters, use volume mounts with an application-side file watcher instead.
Immutable ConfigMaps
Marking a ConfigMap as immutable has two benefits: it prevents accidental in-place modification and it reduces API server and etcd load because the kubelet stops watching immutable resources.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: order-service-config-v3
immutable: true   # cannot be reversed once set
data:
  LOG_LEVEL: "info"
  MAX_RETRIES: "3"
```
The immutable field reached GA in Kubernetes v1.21 (April 2021). Once set to true, it cannot be reverted. Any attempt to modify data or binaryData is rejected by the API server. To update the configuration, create a new ConfigMap with a new name and update the Deployment to reference it:
```shell
# Create the new version
kubectl create configmap order-service-config-v4 \
  --from-literal=LOG_LEVEL=debug \
  --from-literal=MAX_RETRIES=5

# Update the Deployment (triggers a rolling update)
kubectl set env deployment/order-service \
  --from=configmap/order-service-config-v4

# Clean up after rollout completes
kubectl rollout status deployment/order-service
kubectl delete configmap order-service-config-v3
```
Kustomize configMapGenerator: immutability without manual naming
Kustomize's configMapGenerator appends a content hash to the ConfigMap name automatically. Every content change produces a new name, which updates the pod spec and triggers a rolling update. No manual versioning needed.
```yaml
# kustomization.yaml
configMapGenerator:
- name: order-service-config
  files:
  - application.yaml
  literals:
  - LOG_LEVEL=info
```
This generates a ConfigMap named something like order-service-config-g4h8m2k7. Change application.yaml and run kubectl apply -k . again: a new hash, a new ConfigMap, a new rollout. This pairs naturally with Kustomize overlays for managing per-environment configuration.
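The property that makes this work is easy to demonstrate. The sketch below is not Kustomize's actual algorithm (which hashes the serialized object with an FNV-based function); `hashed_name` is an illustrative stand-in showing the invariant that matters: same content, same name; any change, new name.

```python
import hashlib

def hashed_name(base: str, data: dict[str, str]) -> str:
    """Derive a content-addressed ConfigMap name, in the spirit of
    configMapGenerator. Keys are sorted so the hash is order-independent."""
    h = hashlib.sha256()
    for key in sorted(data):
        h.update(key.encode())
        h.update(b"=")
        h.update(data[key].encode())
        h.update(b"\n")
    return f"{base}-{h.hexdigest()[:10]}"

v1 = hashed_name("order-service-config", {"LOG_LEVEL": "info"})
v2 = hashed_name("order-service-config", {"LOG_LEVEL": "debug"})
print(v1 != v2)  # True: any content change yields a new name
```

Because the name lands in the pod template, the Deployment's pod spec changes whenever the content does, and Kubernetes handles the rollout from there.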
ConfigMap vs. Secret: when to use which
A common misconception is that ConfigMaps and Secrets are equivalent except for base64 encoding. They are not.
| Aspect | ConfigMap | Secret |
|---|---|---|
| Intended data | Non-confidential configuration | Passwords, tokens, certificates |
| etcd storage | Plaintext | Plaintext by default; encryption at rest available |
| RBAC | Standard | More granular; list/watch grants access to all Secret data in a namespace |
| Size limit | 1 MiB | 1 MiB |
Base64 encoding in the Secret data field is for binary safety, not encryption. Secrets are not encrypted at rest by default; an administrator must explicitly enable EncryptionConfiguration.
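A few lines of Python make the point concrete: base64 is a reversible encoding with no key involved, recoverable by anyone who can read the object.

```python
import base64

# What a Secret's data field holds: plain base64, trivially reversible
encoded = base64.b64encode(b"s3cr3t-password").decode()
print(encoded)                    # czNjcjN0LXBhc3N3b3Jk
print(base64.b64decode(encoded))  # b's3cr3t-password'
```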
The rule of thumb: if the value would be a problem in a public git commit, use a Secret. Feature flags, log levels, timeout values, and file paths belong in a ConfigMap.
When you hit the 1 MiB limit
ConfigMaps are stored in etcd, which has a default maximum request size of 1.5 MiB. Kubernetes enforces 1 MiB to leave headroom. If your configuration grows past that:
- Split into multiple ConfigMaps. Mount each separately. Works for moderate overages.
- External configuration. HashiCorp Vault, AWS AppConfig, Azure App Configuration. These support versioning, audit trails, and access controls that ConfigMaps lack.
- Object storage + init container. Store large config files in S3 or GCS and download them during pod startup.
As a practical baseline: keep individual ConfigMaps under 100 to 200 KiB. Smaller ConfigMaps reduce etcd pressure, speed up pod startup, and simplify debugging.
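A pre-flight size check in CI can catch overages before the API server does. The sketch below is mine (the helper and the soft limit are assumptions, not Kubernetes APIs); it approximates payload size as the sum of key and value bytes, which slightly undercounts the serialized object, so leave headroom:

```python
def configmap_size_bytes(data: dict[str, str]) -> int:
    """Approximate ConfigMap payload size: sum of key and value bytes."""
    return sum(len(k.encode()) + len(v.encode()) for k, v in data.items())

HARD_LIMIT = 1 * 1024 * 1024   # 1 MiB, enforced by the API server
SOFT_LIMIT = 200 * 1024        # practical baseline suggested above

data = {"application.yaml": "x" * 150_000}
size = configmap_size_bytes(data)
print(size, size <= SOFT_LIMIT, size <= HARD_LIMIT)
```

Failing the build when `SOFT_LIMIT` is crossed gives you time to split the config before the hard limit becomes an outage.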
Verify the final result
After applying your ConfigMap and Deployment, confirm everything works:
```shell
# Verify the ConfigMap exists and contains expected data
kubectl get configmap order-service-config -n production -o yaml

# Verify the pod sees the mounted config
kubectl exec -n production deploy/order-service -- \
  cat /etc/config/application.yaml

# Update the ConfigMap and confirm propagation (volume mount)
kubectl edit configmap order-service-config -n production
# Wait ~60-90 seconds, then:
kubectl exec -n production deploy/order-service -- \
  cat /etc/config/application.yaml
# The output should reflect the updated values
```
For Reloader-based setups, the pod should restart automatically after the ConfigMap edit. Check with:
```shell
kubectl get pods -n production -w
# You should see the old pod terminating and a new one starting
```
Common troubleshooting
ConfigMap update is not reaching the pod. Check the consumption method. If using env or envFrom, updates require a pod restart. If using a volume mount with subPath, updates are never propagated. See the update propagation summary.
File watcher triggers IN_DELETE_SELF instead of IN_MODIFY. Expected behavior. The kubelet swaps a symlink, it does not edit files in place. See the atomic symlink section. Watch the directory, not individual files.
Pod fails to start with "configmap not found". ConfigMaps are namespace-scoped. The ConfigMap must be in the same namespace as the pod. Check with kubectl get configmap -n <namespace>. If the ConfigMap genuinely does not exist and optional: true is not set on the reference, the pod will stay in ContainerCreating.
Reloader is installed but pods are not restarting. Verify the annotation is on the Deployment (not the Pod). Check Reloader logs: kubectl logs -n reloader deploy/reloader-reloader.
ConfigMap exceeds 1 MiB. Split the data across multiple ConfigMaps or move large blobs to external storage. The API server rejects any ConfigMap over 1 MiB at admission time.