Table of contents
- What you will learn
- Assumed starting point
- How Docker Compose maps to Kubernetes
- Step 1: audit your docker-compose.yml
- Step 2: convert with Kompose
- Step 3: create a namespace and secrets
- Step 4: fix storage with proper PersistentVolumeClaims
- Step 5: add resource requests and limits
- Step 6: add health probes
- Step 7: replace depends_on with init containers
- Step 8: set the right deployment strategy
- Step 9: apply and verify
- Three misconceptions that trip people up
- Production hardening checklist
- What you learned
What you will learn
By the end of this tutorial, you will have converted a Docker Compose file into Kubernetes manifests and hardened them with resource limits, health probes, proper secrets, and a correct deployment strategy. You will understand why Kompose output is a starting point, not a finished product.
Assumed starting point
This tutorial assumes you have:
- A working `docker-compose.yml` using Compose V3 format, with at least two services (a web application and a database)
- A Kubernetes cluster with `kubectl` configured and connected (Minikube, kind, or a cloud-managed cluster)
- `kubectl` version 1.28 or later
- Basic familiarity with YAML and the command line
- Container images already built and pushed to a registry (Kompose does not build images for you by default)
The examples use a WordPress + MySQL stack because it covers every migration challenge: volumes, secrets, dependency ordering, and health checks. The same principles apply to any multi-service Compose file.
How Docker Compose maps to Kubernetes
Before touching any tool, understand what translates and what does not. This table saves you from the most common surprises.
| Docker Compose concept | Kubernetes equivalent | Maps cleanly? |
|---|---|---|
| `services:` | Deployment + Service per service | Yes |
| `image:` | Container image in pod spec | Yes |
| `ports:` | Service with `port` and `targetPort` | Yes |
| `environment:` | `env:` in container spec (plaintext) | Technically yes, but a security problem |
| `volumes:` (named) | PersistentVolumeClaim | Partially: needs StorageClass, access mode, size |
| `depends_on:` | Nothing. Use init containers | No |
| `networks:` | Ignored. Kubernetes uses flat networking | No |
| `build:` | Ignored. Pre-build your images | No |
| `restart: always` | `restartPolicy: Always` (default) | Yes |
Source: Kompose conversion matrix
The fields that do not map cleanly are exactly the ones that cause production incidents when left unaddressed.
Step 1: audit your docker-compose.yml
Before running any conversion tool, walk through your Compose file and categorize every configuration value.
Take this typical WordPress stack:
```yaml
# docker-compose.yml (Compose V3)
services:
  wordpress:
    image: wordpress:6.7-apache
    ports:
      - "80:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: change-me-in-production # sensitive
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp-content:/var/www/html/wp-content
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root-change-me # sensitive
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: change-me-in-production # sensitive
    volumes:
      - db-data:/var/lib/mysql
volumes:
  wp-content:
  db-data:
```
Classify each element:
- Sensitive environment variables (`WORDPRESS_DB_PASSWORD`, `MYSQL_ROOT_PASSWORD`, `MYSQL_PASSWORD`): will become Kubernetes Secrets
- Non-sensitive environment variables (`WORDPRESS_DB_HOST`, `WORDPRESS_DB_NAME`): will become a ConfigMap
- Named volumes (`wp-content`, `db-data`): will become PersistentVolumeClaims with explicit StorageClass and size
- `depends_on`: will need an init container on the WordPress Deployment
- `networks` (not present here, but common): silently ignored by Kompose
Checkpoint: you can name every sensitive value, every volume, and every dependency in your Compose file before continuing.
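A quick mechanical pass can back up the manual audit. This is a sketch, not a substitute for reading the file: the heredoc stands in for your real `docker-compose.yml`, and the pattern list is an assumption you should extend to match your own naming conventions.

```shell
# Sketch: flag likely-sensitive environment variables in a Compose file.
# The heredoc below is a stand-in for your real docker-compose.yml.
cat > /tmp/audit-demo.yml <<'EOF'
environment:
  WORDPRESS_DB_HOST: db
  WORDPRESS_DB_PASSWORD: change-me-in-production
  MYSQL_ROOT_PASSWORD: root-change-me
EOF

# Anything matching these patterns should become a Kubernetes Secret.
grep -nE 'PASSWORD|SECRET|TOKEN|_KEY' /tmp/audit-demo.yml
```

Run the same grep against your actual file and compare the hits with your manual classification; anything the grep misses (API keys with unusual names, connection strings) still needs the manual pass.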
Step 2: convert with Kompose
Kompose is an official Kubernetes project that converts Docker Compose files into Kubernetes manifests.
Install it:
```shell
# Linux (Kompose v1.34.0)
curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose

# macOS
brew install kompose
```
Run the conversion:
```shell
kompose convert -f docker-compose.yml
```
You will see warnings like these:
```
WARN Unsupported depends_on key - ignoring
WARN Unsupported networks key - ignoring
```
These warnings mean those fields are silently dropped from the output. Kompose is not broken; it is telling you the output is incomplete.
List what Kompose generated:
```shell
ls -la *.yaml
# Expected output:
# db-deployment.yaml
# db-service.yaml
# db-data-persistentvolumeclaim.yaml
# wordpress-deployment.yaml
# wordpress-service.yaml
# wp-content-persistentvolumeclaim.yaml
```
Do not apply these files yet. They need hardening first. The official Kubernetes documentation also says to "review and edit" before applying.
Checkpoint: you have six YAML files. None of them should be applied as-is.
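You can confirm the gaps rather than take them on faith by grepping the generated manifests for the hardening fields the next steps add. The heredoc below is a trimmed, hypothetical stand-in for a Kompose-generated Deployment; run the same greps against your real files.

```shell
# Trimmed, hypothetical stand-in for a Kompose-generated Deployment.
cat > /tmp/kompose-demo.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - image: wordpress:6.7-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_PASSWORD
              value: change-me-in-production
EOF

# None of the hardening fields are present yet:
for field in resources: livenessProbe: readinessProbe: secretKeyRef:; do
  grep -q "$field" /tmp/kompose-demo.yaml || echo "missing $field"
done
```

Each "missing" line corresponds to one of Steps 3 through 8 below.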
Step 3: create a namespace and secrets
Start by creating a dedicated namespace and moving sensitive values out of plaintext.
```shell
kubectl create namespace wordpress
```
Create Secrets for all sensitive values identified in Step 1:
```shell
kubectl create secret generic wordpress-secrets \
  --from-literal=db-password="$(openssl rand -base64 24)" \
  --from-literal=db-root-password="$(openssl rand -base64 24)" \
  --namespace=wordpress
```
Create a ConfigMap for non-sensitive configuration:
```yaml
# wordpress-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-config
  namespace: wordpress
data:
  WORDPRESS_DB_HOST: "db"
  WORDPRESS_DB_NAME: "wordpress"
  WORDPRESS_DB_USER: "wordpress"
```

```shell
kubectl apply -f wordpress-config.yaml
```
Now edit both Deployment files to reference these instead of plaintext env: values. In wordpress-deployment.yaml and db-deployment.yaml, replace every env: block that Kompose generated. For example, in the WordPress Deployment:
```yaml
# Replace Kompose's plaintext env: block with this
envFrom:
  - configMapRef:
      name: wordpress-config
env:
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: wordpress-secrets
        key: db-password
```
And in the MySQL Deployment:
```yaml
env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: wordpress-secrets
        key: db-root-password
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: wordpress-secrets
        key: db-password
  - name: MYSQL_DATABASE
    value: "wordpress"
  - name: MYSQL_USER
    value: "wordpress"
```
Base64 encoding (used in Secret YAML manifests) is encoding, not encryption. For production GitOps workflows, look into Sealed Secrets or HashiCorp Vault.
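You can demonstrate this to yourself in one pipeline; the value here is just the placeholder password from the Compose file.

```shell
# base64 is reversible by anyone holding the manifest; no key is involved.
secret='change-me-in-production'
encoded=$(printf '%s' "$secret" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # identical to the original value
```

Anyone with read access to the Secret manifest (or to the Git repository it is committed to) can recover the plaintext the same way.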
Checkpoint: no sensitive values appear as plaintext in any YAML file.
Step 4: fix storage with proper PersistentVolumeClaims
Kompose creates PVC manifests, but they lack critical fields. The generated PVCs reference no StorageClass and no storage size. For a deeper understanding of PVs, PVCs, and StorageClasses, see Kubernetes PersistentVolumes and PersistentVolumeClaims.
First, check which StorageClasses your cluster offers:
```shell
kubectl get storageclass
# Example output on a cloud cluster:
# NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE
# standard (default)   kubernetes.io/gce-pd    Delete          WaitForFirstConsumer
# premium-rwo          pd.csi.storage.gke.io   Delete          WaitForFirstConsumer
```
Replace the generated PVC for the database with a properly specified one:
```yaml
# db-data-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
  namespace: wordpress
spec:
  accessModes:
    - ReadWriteOnce # single pod read/write, correct for databases
  storageClassName: standard # must match a StorageClass in your cluster
  resources:
    requests:
      storage: 20Gi
```
Do the same for wp-content:
```yaml
# wp-content-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-content
  namespace: wordpress
spec:
  accessModes:
    - ReadWriteOnce # fine for single-replica WordPress
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```
If your StorageClass uses the hostPath provisioner (Minikube default), data is stored in a temp directory on the node and lost when the pod is rescheduled. That is development-only.
Checkpoint: run kubectl get storageclass and confirm your chosen StorageClass exists. Each PVC specifies an accessMode, a storageClassName, and a storage size.
Step 5: add resource requests and limits
Kompose generates no resources block. Without resource requests, the scheduler has no capacity guarantee for your pods. Without limits, a runaway container can consume an entire node. For the full mechanics, see Kubernetes resource requests and limits.
Add resource blocks to every container in every Deployment. These are starting values; tune them with kubectl top pods or VPA in recommendation mode once you have production data.
WordPress container:
```yaml
resources:
  requests:
    cpu: "100m" # scheduling guarantee
    memory: "128Mi"
  limits:
    cpu: "500m" # kernel throttles above this
    memory: "256Mi" # kernel OOMKills above this
```
MySQL container:
```yaml
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1000m"
    memory: "512Mi"
```
The two limits fail differently: exceeding the CPU limit throttles the process (it slows down but keeps running), while exceeding the memory limit OOMKills it (the process dies). Set memory limits conservatively but realistically. A limit that is too tight causes frequent restarts; one that is too generous wastes node capacity.
Checkpoint: every container in every Deployment has both requests and limits.
Step 6: add health probes
Kompose generates no health probes. Without them, Kubernetes cannot detect a deadlocked container, cannot route traffic away from unhealthy pods, and cannot safely verify new pods during rolling updates. For probe types, mechanisms, and timing, see how to configure Kubernetes health probes.
Add probes to the WordPress container:
```yaml
# WordPress health probes
startupProbe: # gates liveness + readiness until boot completes
  httpGet:
    path: /wp-login.php
    port: 80
  failureThreshold: 30 # 30 x 10 = 300 seconds max startup window
  periodSeconds: 10
livenessProbe: # restarts container if stuck
  httpGet:
    path: /wp-login.php
    port: 80
  periodSeconds: 30
  failureThreshold: 3
readinessProbe: # removes pod from Service endpoints if unhealthy
  httpGet:
    path: /wp-login.php
    port: 80
  periodSeconds: 10
  failureThreshold: 3
```
Plugin-heavy WordPress installations can take over 60 seconds on first boot. The startup probe with a 300-second window accounts for this without penalizing fast boots.
Add a TCP probe to the MySQL container (MySQL does not expose HTTP):
# MySQL health probes
startupProbe:
tcpSocket:
port: 3306
failureThreshold: 30
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 3306
periodSeconds: 10
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 3306
periodSeconds: 10
failureThreshold: 3
Checkpoint: every container has a startupProbe, livenessProbe, and readinessProbe.
Step 7: replace depends_on with init containers
Docker Compose depends_on ensures the database container starts before WordPress. Kubernetes has no equivalent field. Kompose silently drops it. Without a replacement, WordPress tries to connect to MySQL before MySQL is ready, hitting Error establishing a database connection on first deploy.
The solution: add an init container to the WordPress Deployment that waits for MySQL to accept connections.
```yaml
# Add to the WordPress Deployment spec.template.spec
initContainers:
  - name: wait-for-mysql
    image: mysql:8.0 # reuse the MySQL image since it has mysqladmin
    command:
      - sh
      - -c
      - |
        until mysqladmin ping -h db --silent; do
          echo "Waiting for MySQL..."
          sleep 3
        done
    env:
      - name: MYSQL_PWD
        valueFrom:
          secretKeyRef:
            name: wordpress-secrets
            key: db-root-password
```
Init containers run sequentially, to completion, before the main containers start. Once mysqladmin ping succeeds, WordPress boots with a working database connection.
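The wait loop itself is plain shell and can be tried outside the cluster. In this local sketch, a file created by a background job stands in for MySQL accepting connections (the real init container uses `mysqladmin ping` instead of the file check):

```shell
# Local sketch of the init container's wait loop.
# A background job touching a file stands in for MySQL coming up.
flag=$(mktemp -u)
( sleep 1; touch "$flag" ) &   # "MySQL" becomes ready after ~1 second

until [ -f "$flag" ]; do
  echo "Waiting for MySQL..."
  sleep 1
done
echo "connected"
rm -f "$flag"
```

The shape is identical: poll a readiness check in a loop, sleep between attempts, and fall through only when the check succeeds.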
Checkpoint: deploy the WordPress pod and check kubectl describe pod <wordpress-pod> -n wordpress. You should see the wait-for-mysql init container with status Completed before the main container starts.
Step 8: set the right deployment strategy
This is the step people skip, and then spend hours debugging.
The default Kubernetes deployment strategy is RollingUpdate: create a new pod before terminating the old one. With ReadWriteOnce PVCs (the most common type for cloud block storage), only one pod can mount the volume at a time. During a rolling update, the new pod hangs in Pending with a Multi-Attach error because the old pod still holds the volume.
For single-replica workloads with ReadWriteOnce PVCs, use Recreate:
```yaml
# In both wordpress-deployment.yaml and db-deployment.yaml
spec:
  strategy:
    type: Recreate # terminate old pod before creating new pod
```
This causes brief downtime during updates. For a single-replica WordPress + MySQL setup, that tradeoff is acceptable and avoids the Multi-Attach deadlock entirely.
If you need zero-downtime for WordPress, you need ReadWriteMany storage (NFS, AWS EFS, Azure Files) so multiple pods can mount the same volume. That is a different architecture than what Docker Compose gives you.
Checkpoint: both Deployments have strategy.type: Recreate.
Step 9: apply and verify
Apply all manifests to the wordpress namespace:
```shell
kubectl apply -f . -n wordpress
```
Monitor pod status:
```shell
kubectl get pods -n wordpress -w
# Expected: init container runs first, then both pods reach Running/Ready
```
Check PVC binding:
```shell
kubectl get pvc -n wordpress
# Expected: STATUS = Bound for both PVCs
```
If a PVC stays in Pending, the most likely cause is a missing or mismatched StorageClass. Verify with kubectl describe pvc <name> -n wordpress and compare the storageClassName against kubectl get storageclass.
Wait for the WordPress rollout:
```shell
kubectl rollout status deployment/wordpress -n wordpress
# Expected: "deployment 'wordpress' successfully rolled out"
```
Access the WordPress setup wizard:
```shell
kubectl port-forward svc/wordpress 8080:80 -n wordpress
# Open http://localhost:8080 in your browser
```
Checkpoint: you see the WordPress installation wizard. The database connection works. Both PVCs are Bound.
Three misconceptions that trip people up
"Kompose output is production-ready"
Kompose is scaffolding. The official Kompose documentation states conversions are "not always 1-to-1." Generated manifests lack resource limits (risk: pod eviction and node starvation), health probes (risk: traffic routed to dead pods), and secrets handling (risk: passwords in Git). Steps 3 through 8 of this tutorial exist entirely because Kompose output is incomplete.
"Docker Compose volumes map directly to Kubernetes volumes"
Docker Compose named volumes are host-local storage. Kubernetes offers multiple volume types, and Kompose creates PVCs that may reference a StorageClass that does not exist in your cluster. Without a valid StorageClass, the PVC hangs in Pending forever with no obvious error. On Minikube, the default hostPath provisioner stores data in /tmp, which is lost on rescheduling.
"Docker Compose networks map to Kubernetes namespaces"
They are fundamentally different. Docker Compose networks are per-project bridge networks for DNS isolation. Kubernetes uses flat networking: every pod can reach every other pod by IP address across the entire cluster. Namespaces are administrative boundaries (RBAC, resource quotas), not network isolation. If you need network isolation post-migration, you need NetworkPolicy objects.
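As a sketch of what that isolation looks like, the hypothetical policy below admits traffic to the database only from pods labeled `app: wordpress` (the `app: db` and `app: wordpress` labels are assumptions; match whatever labels your Deployments actually carry, and note that enforcement requires a CNI plugin that supports NetworkPolicy):

```yaml
# Hypothetical policy: only WordPress pods may reach MySQL on 3306.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-wordpress-only
  namespace: wordpress
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: wordpress
      ports:
        - protocol: TCP
          port: 3306
```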
Production hardening checklist
Before running this in production, verify each item:
- [ ] Resource requests and limits on every container
- [ ] Liveness, readiness, and startup probes on every container
- [ ] All sensitive values in Kubernetes Secrets, no plaintext in YAML committed to Git
- [ ] StorageClass confirmed with `kubectl get storageclass`
- [ ] PVC reclaim policy set to `Retain` for database volumes (`kubectl patch pv <name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'`)
- [ ] Deployment strategy matches PVC access mode (`Recreate` for `ReadWriteOnce`)
- [ ] Dedicated ServiceAccount per application with minimal RBAC
- [ ] Ingress with TLS termination for external access (see cert-manager NGINX tutorial)
- [ ] NetworkPolicy if inter-service isolation is required
What you learned
You started with a Docker Compose file and ended with production-hardened Kubernetes manifests. The critical insight: Kompose handles the structural conversion, but the security, reliability, and operational gaps are yours to fill.
The concepts you applied here (Secrets, PVCs, init containers, health probes, deployment strategy) are the same ones you will use for any Kubernetes workload, not just Docker Compose migrations. For production WordPress on Kubernetes specifically, the Bitnami WordPress Helm chart bundles these patterns into a single, maintained package. The manual path in this tutorial teaches you what that chart does under the hood.