## What you will have at the end
A Deployment manifest with one or more init containers that run setup tasks (waiting for a dependency, running database migrations, fetching configuration, fixing filesystem permissions) before the main application container starts. You will also know how to debug failing init containers and when to use sidecar containers instead.
## Prerequisites

- A running Kubernetes cluster (v1.28 or later if you want sidecar container support; any v1.6+ cluster for regular init containers)
- `kubectl` configured and able to reach the cluster
- Familiarity with Deployment and Pod specs
- For resource sizing on init containers, see Kubernetes resource requests and limits
## What init containers do

Init containers are specialized containers declared in the `initContainers` array of a pod spec. They run sequentially to completion before any application container starts. Each init container must exit with code 0 before the next one begins. If any init container fails, the kubelet retries it according to the pod's `restartPolicy`.

Three properties make them useful:

- **Different image.** An init container can use `busybox`, `mysql-client`, or `vault` without bloating the application image.
- **Blocking guarantee.** The main containers do not start until every init container succeeds. No race conditions.
- **Sequential ordering.** Init containers execute in declaration order, so each one can depend on the side effects of the previous one.

Init containers do not support `livenessProbe`, `readinessProbe`, or `startupProbe`. They run to completion; they are not long-lived processes.
## Wait for a dependency

Docker Compose uses `depends_on` with `condition: service_healthy` to gate container startup on another service. Kubernetes has no equivalent field at the pod spec level. If you are migrating from Docker Compose, see the full migration tutorial for the broader context.

An init container with a retry loop fills this gap:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      initContainers:
        - name: wait-for-postgres
          image: busybox:1.37
          command:
            - sh
            - -c
            - |
              until nc -z postgres-service 5432; do
                echo "Waiting for PostgreSQL on postgres-service:5432..."
                sleep 2
              done
      containers:
        - name: api
          image: mycompany/api-server:3.2.0
          ports:
            - containerPort: 8080
```
`nc -z` attempts a TCP connection without sending data. Once PostgreSQL accepts connections, the init container exits 0 and the `api` container starts.
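The loop above retries forever. A variant worth considering bounds the wait with a total deadline, so a permanently missing dependency surfaces as an init failure instead of a pod stuck waiting. This is a sketch: `test -e "$READY_FILE"` stands in for the real `nc -z` check, and the 3-second budget is illustrative.

```sh
#!/bin/sh
# Sketch: bound the retry loop with a total deadline.
# "test -e $READY_FILE" stands in for the real check (nc -z host port).
READY_FILE="/tmp/ready-marker-that-does-not-exist"   # hypothetical stand-in
deadline=$(( $(date +%s) + 3 ))                      # illustrative 3-second budget
status=0
until test -e "$READY_FILE"; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "dependency not ready before deadline" >&2
    status=1
    break
  fi
  sleep 1
done
echo "status=$status"
```

In a real init container you would `exit "$status"` at the end; a non-zero exit puts the pod into `Init:Error`, which is easier to alert on than a silent infinite wait.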
For DNS-based checks (waiting for a Service to exist in the cluster), use `nslookup`:

```sh
until nslookup postgres-service.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do
  echo "Waiting for DNS..."
  sleep 2
done
```
This only works within a pod. Cross-pod startup ordering (making sure Pod A is Running before Pod B is created) requires external orchestration: Helm hooks, Argo CD sync waves, or application-level retry logic.
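As one hedged illustration of that external orchestration, a Helm chart can gate a release on a dependency by running the readiness check as a `pre-install`/`pre-upgrade` hook Job; the hook must succeed before the chart's other manifests are applied. The image and check below are placeholders carried over from the earlier example.

```yaml
# Sketch: Helm hook Job that gates a release on a dependency being reachable.
apiVersion: batch/v1
kind: Job
metadata:
  name: wait-for-postgres-hook
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 30
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: wait
          image: busybox:1.37
          command: ["sh", "-c", "until nc -z postgres-service 5432; do sleep 2; done"]
```

Argo CD sync waves achieve the same ordering declaratively via the `argocd.argoproj.io/sync-wave` annotation.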
## Run database migrations

An init container can run your migration tool before the application boots, guaranteeing the schema is correct at startup:
```yaml
initContainers:
  - name: db-migrate
    image: mycompany/api-migrations:3.2.0  # dedicated migration image
    command: ["./migrate", "--target=latest"]
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: url
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        memory: "512Mi"
```
**The concurrency trap.** When a Deployment scales to multiple replicas, every pod runs its init containers independently. Three replicas means three simultaneous migration runs hitting the same database. Migration tools like Atlas, Flyway, and Liquibase use advisory locks to make this safe, but not all tools do. If your migration tool does not support locking, use a Kubernetes Job with `parallelism: 1` instead. Jobs also give you better log retention and decoupled rollback.
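A minimal sketch of that Job alternative, reusing the migration image and credentials from the example above (the per-release name is hypothetical; tune `backoffLimit` for your tool):

```yaml
# Sketch: run migrations once per rollout as a Job instead of per-pod.
apiVersion: batch/v1
kind: Job
metadata:
  name: api-migrations-3-2-0   # hypothetical per-release name
spec:
  parallelism: 1    # never more than one migration pod at a time
  completions: 1
  backoffLimit: 3   # retry a few times, then surface the failure
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-migrate
          image: mycompany/api-migrations:3.2.0
          command: ["./migrate", "--target=latest"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
```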
Add a timeout to protect against hanging migrations:

```sh
timeout 300 ./migrate --target=latest
```

Without a timeout, a stuck migration blocks pod startup indefinitely. The pod shows `Init:0/1` forever.
## Fetch configuration from an external source
Init containers can pull configuration from Vault, AWS Secrets Manager, or a remote config service and write it to a shared volume. The application container reads from that volume at startup.
```yaml
initContainers:
  - name: fetch-config
    image: vault:1.18
    command:
      - sh
      - -c
      - |
        vault login -method=kubernetes role=api-server
        vault kv get -field=config secret/api-server > /config/app.conf
    env:
      - name: VAULT_ADDR
        value: "https://vault.internal:8200"
    volumeMounts:
      - name: config-vol
        mountPath: /config
containers:
  - name: api
    image: mycompany/api-server:3.2.0
    volumeMounts:
      - name: config-vol
        mountPath: /config
        readOnly: true
volumes:
  - name: config-vol
    emptyDir: {}
```
For sensitive data, use `emptyDir` with `medium: Memory` so the config never touches disk:
```yaml
volumes:
  - name: config-vol
    emptyDir:
      medium: Memory  # RAM-backed tmpfs
```
Native Kubernetes Secrets are base64-encoded, not encrypted at rest by default. For multi-tenant clusters or compliance-sensitive workloads, fetching secrets from a dedicated vault through an init container is a stronger pattern.
## Share data via `emptyDir` volumes

The `emptyDir` volume is the standard mechanism for passing data from an init container to the application container. The lifecycle is straightforward:

1. Declare an `emptyDir` volume in the pod spec.
2. Mount it in both the init container and the application container.
3. The init container writes files. It exits.
4. The application container reads those files.
5. The volume is destroyed when the pod terminates.
Because init containers finish before application containers start, there is a built-in timing guarantee: the application never reads a half-written file.
Complete example that downloads content into an nginx container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: content-init-demo
spec:
  initContainers:
    - name: download-content
      image: busybox:1.37
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - "http://info.cern.ch"
      volumeMounts:
        - name: workdir
          mountPath: /work-dir
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  volumes:
    - name: workdir
      emptyDir: {}
```
Source: Kubernetes configure pod initialization tutorial
## Fix filesystem permissions
Init containers can run as root to fix ownership or permissions on a volume, even when the application container runs as a non-root user:
```yaml
initContainers:
  - name: fix-permissions
    image: busybox:1.37
    command: ['sh', '-c', 'chown -R 1000:1000 /data']
    volumeMounts:
      - name: data-vol
        mountPath: /data
    securityContext:
      runAsUser: 0  # root, only during init
containers:
  - name: app
    image: mycompany/api-server:3.2.0
    securityContext:
      runAsUser: 1000
      runAsNonRoot: true
    volumeMounts:
      - name: data-vol
        mountPath: /data
```
Root access is confined to the init container. The application container runs unprivileged.
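For volume types that support ownership management, Kubernetes can often do this fix without any init container: setting `fsGroup` in the pod-level `securityContext` makes the kubelet change group ownership of mounted volumes at mount time. A sketch, reusing the names from the example above:

```yaml
# Sketch: pod-level fsGroup instead of a chown init container.
# The kubelet makes the data-vol mount group-owned by GID 1000 at mount time.
spec:
  securityContext:
    fsGroup: 1000
  containers:
    - name: app
      image: mycompany/api-server:3.2.0
      securityContext:
        runAsUser: 1000
        runAsNonRoot: true
      volumeMounts:
        - name: data-vol
          mountPath: /data
```

`fsGroup` does not apply to every volume type (`hostPath`, for example, is left untouched), so the init container pattern remains useful as a fallback.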
## Ordering and failure handling
Init containers execute in declaration order. No parallelism. The kubelet waits for each to exit 0 before starting the next.
Pod status reflects init container progress:

| Status | Meaning |
|---|---|
| `Init:0/3` | No init containers have completed |
| `Init:1/3` | First completed, second running |
| `Init:Error` | An init container exited non-zero |
| `Init:CrashLoopBackOff` | An init container is repeatedly failing |
What happens on failure depends on the pod's `restartPolicy`:

| `restartPolicy` | Behavior on init container failure |
|---|---|
| `Always` (Deployment default) | Kubelet restarts the failed init container with exponential backoff (10s, 20s, 40s, up to 5 min) |
| `OnFailure` | Same restart with backoff |
| `Never` | Pod is marked `Failed`; no restart |
The kubelet only restarts the failed init container, not the entire sequence. Init containers that already succeeded do not re-run.
A pod stuck in `Init:CrashLoopBackOff` is different from one stuck in `ContainerCreating`. The `ContainerCreating` state means init containers have already passed and the kubelet is setting up the main container's prerequisites (volumes, network). If you see `ContainerCreating`, see debugging pods stuck in ContainerCreating.
## Debugging failing init containers

Start with `kubectl describe pod` to see Events and init container status:

```sh
kubectl describe pod api-server-7f8b4c5d6-x9k2m
```

Look for the `Init Containers:` section in the output. It shows each init container's state, exit code, and restart count.

Read logs from a specific init container:

```sh
kubectl logs api-server-7f8b4c5d6-x9k2m -c wait-for-postgres
```

If the init container has already crashed and restarted, read the previous run's logs:

```sh
kubectl logs api-server-7f8b4c5d6-x9k2m -c wait-for-postgres --previous
```
Add `set -x` to shell-based init containers for verbose command tracing:

```yaml
command:
  - sh
  - -c
  - |
    set -x  # prints every command before execution
    until nc -z postgres-service 5432; do
      sleep 2
    done
```
Source: Kubernetes debug init containers guide
## Resource requests and init container scheduling

The Kubernetes scheduler calculates effective pod resource requirements as:

```
effective request = max(highest single init container request, sum of all app container requests)
```
This means a migration init container requesting 4 Gi of memory inflates the pod's scheduling footprint even though that memory is only needed for a few seconds at startup. The scheduler reserves enough capacity for whichever phase (init or run) is larger.
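A quick worked example of the rule, with illustrative numbers in MiB (this models regular init containers only; sidecars are accounted differently):

```python
# Illustrative effective-request arithmetic (memory, in MiB).
init_requests = [4096]      # one migration init container asking for 4 GiB
app_requests = [512, 256]   # the application containers' requests (sum: 768)
effective = max(max(init_requests), sum(app_requests))
print(effective)  # 4096: the short-lived migration dominates scheduling
```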
Right-size init container resource requests. An init container that just runs `nc -z` does not need 512 Mi of memory. Overblown init container requests can prevent pods from scheduling on smaller nodes.
For more on how requests and limits affect scheduling, QoS class assignment, and overcommit strategy, see Kubernetes resource requests and limits.
## Sidecar containers vs. init containers

Kubernetes 1.28 introduced native sidecar containers as a special form of init container. The feature reached stable (GA) in Kubernetes 1.33, and the `SidecarContainers` feature gate has been enabled by default since Kubernetes 1.29.

A sidecar is an init container with `restartPolicy: Always`:
```yaml
initContainers:
  - name: log-shipper
    image: fluentd:v1.17
    restartPolicy: Always  # this makes it a sidecar
    volumeMounts:
      - name: log-vol
        mountPath: /app/logs
containers:
  - name: app
    image: mycompany/api-server:3.2.0
    volumeMounts:
      - name: log-vol
        mountPath: /app/logs
volumes:
  - name: log-vol
    emptyDir: {}
```
The behavioral differences are significant:
| Behavior | Regular init container | Sidecar (`restartPolicy: Always`) |
|---|---|---|
| Lifetime | Exits before app starts | Runs for the entire pod lifetime |
| Blocks next container | Until it exits 0 | Until its startupProbe passes (if defined) |
| Probe support | None | livenessProbe, readinessProbe, startupProbe |
| Resource accounting | max(init, app) | Added to app container sum |
| Job completion | N/A | Does not block Job completion |
Use a regular init container for one-time setup: dependency waiting, migrations, config fetching, permission fixes. Use a sidecar for processes that must run alongside the application: log shippers, metrics exporters, service mesh proxies (Envoy, Linkerd).
Before native sidecar support, running a log shipper as a regular container in a Job was problematic: the shipper kept the pod alive after the Job's main container exited. Native sidecars do not block Job completion.
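A sketch of that Job shape with a native sidecar (the worker image name is a placeholder):

```yaml
# Sketch: Job whose log shipper runs as a native sidecar.
# The Job completes when "worker" exits; the sidecar is then terminated.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: log-shipper
          image: fluentd:v1.17
          restartPolicy: Always   # sidecar: does not block Job completion
          volumeMounts:
            - name: log-vol
              mountPath: /app/logs
      containers:
        - name: worker
          image: mycompany/batch-worker:1.0.0   # hypothetical batch image
          volumeMounts:
            - name: log-vol
              mountPath: /app/logs
      volumes:
        - name: log-vol
          emptyDir: {}
```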
## Verify the result

After applying a Deployment with init containers:

```sh
# Watch pod startup
kubectl get pods -l app=api-server -w
# Expected: Init:0/1 -> Init:1/1 -> Running

# Confirm the init container completed
kubectl describe pod <pod-name>
# Look for the "Init Containers:" section; state should be "Terminated: Completed"

# Check application logs to confirm it started with correct config/schema
kubectl logs <pod-name> -c api
```
## Common troubleshooting

| Symptom | Likely cause | Fix |
|---|---|---|
| Pod stuck in `Init:0/1` forever | Init container waiting for a service that does not exist or is not ready | Check the target service with `kubectl get svc` and `kubectl get endpoints` |
| `Init:CrashLoopBackOff` | Init container command exits non-zero | Read logs with `kubectl logs <pod> -c <init-name>` and fix the command or image |
| Multiple replicas cause duplicate migrations | Migration tool does not use advisory locks | Switch to a Job with `parallelism: 1` or use a locking-aware tool |
| Pod unschedulable after adding init container | Init container resource request is too large for available nodes | Right-size the init container's `resources.requests` |
| Init container works locally but fails in cluster | Image pull error or wrong registry credentials | Check Events in `kubectl describe pod` for `ImagePullBackOff` |