What you will have at the end
A Helm chart that follows the official Helm best practices, passes helm lint and kubeconform validation, uses namespaced helpers, carries correct labels, handles secrets safely, and runs automated tests in CI. The practices target Helm 4.1 (current stable) but apply equally to Helm 3.x charts still in production.
Prerequisites
- Helm 4.x or 3.x installed locally (installation docs)
- kubectl configured against a development cluster (for integration testing)
- A chart you are writing or auditing, or willingness to run helm create to generate a starter
- Familiarity with Kubernetes Deployments, Services, and ConfigMaps
Chart structure and naming
Start with helm create mychart. The generated scaffold follows the official chart structure and gives you correctly namespaced templates out of the box. Customize it; do not use it verbatim.
mychart/
├── Chart.yaml # required: name, version, apiVersion
├── values.yaml # default configuration
├── values.schema.json # optional: JSON Schema for values validation
├── .helmignore # packaging exclusions
├── charts/ # subchart dependencies
├── crds/ # CRD manifests (installed once, never upgraded)
├── templates/
│ ├── _helpers.tpl # named templates (underscore = not rendered)
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── ingress.yaml
│ ├── serviceaccount.yaml
│ ├── hpa.yaml
│ ├── NOTES.txt # post-install message shown to the user
│ └── tests/
│ └── test-connection.yaml
Naming rules from the Helm naming conventions:
- Chart names: lowercase letters and numbers only, hyphens allowed. No underscores, no dots. Must start with a letter.
- Template files: dashed notation (my-app-configmap.yaml), never camelCase.
- One Kubernetes resource per file, filename reflecting the resource kind.
- .yaml extension for templates that produce YAML output; .tpl for helper files that produce no output on their own.
Chart.yaml: version vs appVersion
These two fields track different things and evolve independently. version is the chart's own version (template logic, default values, structure). appVersion is the application the chart deploys (the container image version). Bumping a template bug from 1.2.3 to 1.2.4 while appVersion stays at 3.0.0 is normal and expected. GitOps controllers like ArgoCD and Flux detect new releases via version, not appVersion. The official docs state explicitly: "The appVersion field is not related to the version field."
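A concrete Chart.yaml illustrating the split (the specific version numbers mirror the example above; the name and description are placeholders):

```yaml
# Chart.yaml
apiVersion: v2
name: mychart
description: Deploys the myapp web service
version: 1.2.4        # chart version: bump for template, values, or structure changes
appVersion: "3.0.0"   # application version: tracks the container image release
```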
values.yaml design
The values file is the public API of your chart. Treat it with the same discipline you would apply to a library's function signatures.
Naming and structure
Use camelCase with a lowercase first letter for every key. The official values guide uses chickenNoodleSoup: true as the canonical example. Uppercase first letters collide with Go template built-ins. Hyphenated names break --set on the command line.
Prefer flat structures over deeply nested ones. Nested values require existence checks at every level in templates, and deep nesting makes --set overrides unwieldy. Use nesting only when a group of related, non-optional values genuinely belongs together (like resources.requests and resources.limits).
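A sketch of the distinction, with illustrative key names:

```yaml
# Preferred: flat keys are trivial to --set and need no existence checks
serverHost: app.internal
serverPort: 8080

# Acceptable nesting: a cohesive, non-optional group
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```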
Document every key
Every property in values.yaml must have a comment starting with the property name. This serves two purposes: it makes grep discovery instant, and it enables helm-docs to auto-generate README tables.
# replicaCount -- Number of pod replicas for the main deployment
replicaCount: 3
# image.repository -- Container image registry and name
# image.tag -- Overrides the chart appVersion
image:
  repository: registry.internal/myapp
  tag: ""
Type safety and override ergonomics
Quote all strings to prevent YAML type coercion surprises. A bare yes becomes boolean true. A bare 012 becomes octal integer 10. Large integers can silently turn into scientific notation.
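An illustrative side-by-side of the coercion traps and their quoted fixes:

```yaml
# Unquoted: the YAML parser coerces these
enabled: yes      # becomes boolean true
mode: 012         # becomes integer 10 (octal)
buildId: 1e4      # can become float 10000.0

# Quoted: the values stay strings
enabled: "yes"
mode: "012"
buildId: "1e4"
```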
Design for --set. Map-based structures are easier to override than arrays. --set ingress.hosts.myapp\.example\.com.paths[0]=/api is clumsy but possible. --set ingress.hosts[0].host=myapp.example.com is fragile because array indexing requires knowing the current order.
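For comparison, here is the same ingress data in both shapes (the key names follow the --set examples above):

```yaml
# Array-based: overriding requires knowing the current index order
# ingress:
#   hosts:
#     - host: myapp.example.com
#       paths:
#         - /api

# Map-based: each host is addressable by a stable key
ingress:
  hosts:
    myapp.example.com:
      paths:
        - /api
```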
Schema validation with values.schema.json
A values.schema.json file in the chart root enforces types, required fields, enums, and patterns on every helm install, helm upgrade, helm lint, and helm template invocation. It catches misconfiguration before template rendering starts, which is earlier and more informative than a Go template error.
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["replicaCount", "image"],
  "properties": {
    "replicaCount": {
      "type": "integer",
      "minimum": 1
    },
    "image": {
      "type": "object",
      "required": ["repository"],
      "properties": {
        "repository": { "type": "string", "minLength": 1 },
        "tag": { "type": "string" }
      }
    }
  }
}
This is one of the most underused Helm features. It also validates subchart values, which makes it a first line of defense in charts with complex dependency trees. Arthur Koziel's guide on values.schema.json covers advanced patterns.
Secrets: never in values.yaml
Helm provides no encryption. Values committed to Git are plaintext. helm get values exposes them to anyone with cluster access. Use the Helm Secrets plugin with Mozilla SOPS for encrypted values files, or reference secrets from HashiCorp Vault or the Kubernetes External Secrets Operator.
Environment separation
Keep a base values.yaml with sensible production-ready defaults. Maintain per-environment override files (values-dev.yaml, values-staging.yaml) and pass them with -f:
# Helm 4.x
helm upgrade --install myapp ./mychart \
-f values-staging.yaml \
--namespace staging
Template helpers and _helpers.tpl
Files starting with _ in templates/ are not rendered as Kubernetes manifests. _helpers.tpl is the conventional home for reusable named templates.
Namespace every template name
Template names are globally scoped across the chart and all subcharts. If your chart defines {{ define "labels" }} and a subchart defines the same, the last-loaded definition wins silently. Always prefix with the chart name:
{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ include "mychart.name" . }}
helm.sh/chart: {{ include "mychart.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
Use include, not template
The built-in template action inserts output inline and cannot be piped. include returns a string, which means you can control indentation:
# Correct: output can be indented
metadata:
  labels:
    {{- include "mychart.labels" . | nindent 4 }}

# Broken: template cannot be piped to nindent
metadata:
  labels:
    {{ template "mychart.labels" . }}
The official template guide describes include as "considered preferable" specifically because of this.
Enforce required values
Use the required function to fail rendering with a clear error when a mandatory value is missing, rather than producing a manifest with empty fields that fails silently at apply time:
{{ required "A valid .Values.database.host entry is required" .Values.database.host }}
The tpl function
When a value itself contains Go template syntax (common for dynamic annotations or configuration snippets), render it with tpl:
annotations:
  {{- with .Values.podAnnotations }}
  {{- tpl (toYaml .) $ | nindent 4 }}
  {{- end }}
Document your helpers
Every {{ define ... }} block should carry a {{/* ... */}} comment explaining its purpose. Future maintainers will not reconstruct your logic from the template syntax alone.
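For example, a documented helper might look like this (a simplified sketch; the scaffold generated by helm create also handles fullnameOverride, which is omitted here):

```yaml
{{/*
mychart.fullname
Builds the release-qualified resource name, truncated to 63 characters
to satisfy the Kubernetes name-length limit.
*/}}
{{- define "mychart.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end }}
```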
Labels and annotations
The Helm labels guide draws a clear line: labels are for identification and querying (kubectl get -l), annotations are for non-queryable metadata (Helm hooks, Prometheus scrape config, cert-manager directives).
Recommended label set
| Label | Value | Purpose |
|---|---|---|
| app.kubernetes.io/name | {{ include "mychart.name" . }} | Identifies the application |
| helm.sh/chart | {{ include "mychart.chart" . }} | Tracks chart name + version |
| app.kubernetes.io/managed-by | {{ .Release.Service }} | Identifies the tool managing the resource |
| app.kubernetes.io/instance | {{ .Release.Name }} | Distinguishes between multiple installs |
| app.kubernetes.io/version | {{ .Chart.AppVersion }} | Tracks the application version |
Selector labels must be immutable
The selectorLabels helper (mychart.selectorLabels in the generated scaffold) should include only app.kubernetes.io/name and app.kubernetes.io/instance. Never put helm.sh/chart or app.kubernetes.io/version in spec.selector.matchLabels. These change on every upgrade, and Kubernetes does not allow selector changes on existing Deployments and StatefulSets.
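A minimal selectorLabels helper and its use in a Deployment selector (following the helm create scaffold's convention):

```yaml
{{- define "mychart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

# templates/deployment.yaml
spec:
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
```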
Resource management in templates
Image specification
Never use floating tags like latest, head, or canary as defaults. A values.yaml should default to a pinned version tag or reference {{ .Chart.AppVersion }}. Separate image.repository and image.tag in values so operators can override the tag without touching the registry path. Set imagePullPolicy: IfNotPresent as the default, which is what helm create generates and what the Helm pods guide recommends.
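In the container spec, this convention renders as follows (the default fallback to .Chart.AppVersion matches what helm create generates):

```yaml
containers:
  - name: myapp
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy | default "IfNotPresent" }}
```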
Resource requests and limits
Always template both requests and limits as configurable values. Containers without limits can starve adjacent pods on the same node. For a deeper explanation of how the scheduler and kernel use these values, see Kubernetes resource requests and limits.
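One common way to wire this up (the nindent depth is illustrative and depends on where the block sits in your manifest):

```yaml
# values.yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

# templates/deployment.yaml (inside the container spec)
resources:
  {{- toYaml .Values.resources | nindent 12 }}
```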
Security context
Default to a restricted security posture and let operators relax it if their workload requires it:
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL
This is also a requirement for OpenShift compatibility and aligns with the Kubernetes Pod Security Standards restricted profile.
Rolling updates on ConfigMap changes
A Deployment spec does not change when its referenced ConfigMap changes. Pods keep running with stale configuration. The Helm tips and tricks guide documents the checksum annotation pattern:
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
This triggers a rolling update whenever the ConfigMap content changes.
Chart hooks
Helm hooks inject imperative steps into the chart lifecycle. The hooks documentation defines nine types: pre-install, post-install, pre-upgrade, post-upgrade, pre-delete, post-delete, pre-rollback, post-rollback, and test.
A typical database migration hook:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migrate"
  annotations:
    "helm.sh/hook": pre-upgrade,pre-install
    "helm.sh/hook-weight": "-5" # lower weight runs first
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["./migrate", "--target", "latest"]
Rules:
- Always set restartPolicy: Never on hook Jobs.
- Always define helm.sh/hook-delete-policy. Without it, old hook resources accumulate in the cluster.
- Use helm.sh/hook-weight as a string (it is an annotation value) to control execution order.
- Combine lifecycle events when the same operation applies to both install and upgrade: "helm.sh/hook": pre-install,pre-upgrade.
When hooks break: the template-pipe problem
Hooks only execute when Helm actively manages the installation. If the chart is applied via helm template | kubectl apply (a common GitOps pattern), hooks are silently skipped. A pre-install hook that creates a ConfigMap will not run, and the Deployment that depends on it fails.
If your chart must work with GitOps tools like ArgoCD or Flux that render manifests before applying, use Kubernetes-native init containers instead of hooks for dependency sequencing. Document the deployment-method assumption in your chart's README if hooks are critical.
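A sketch of the init-container alternative, replacing a hypothetical pre-install hook that waits for configuration to exist (image and paths are placeholders):

```yaml
# Works under helm template | kubectl apply, unlike a Helm hook
spec:
  template:
    spec:
      initContainers:
        - name: wait-for-config
          image: busybox:1.36
          command: ["sh", "-c", "until test -f /config/app.conf; do sleep 2; done"]
          volumeMounts:
            - name: config
              mountPath: /config
```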
CRDs: use the crds/ directory
Do not use the old crd-install hook pattern from Helm 2. Place CRD YAML files in the crds/ directory. Helm installs them on first install but does not update them on upgrade to protect existing custom resource data. For CRD lifecycle management across upgrades, use a dedicated CRD chart or a CRD management operator.
Testing your chart
helm lint validates chart structure. It does not validate whether the rendered manifests are valid Kubernetes objects. A chart that passes helm lint can still produce manifests that fail to apply because of wrong field names or deprecated API versions. Alexandre Vazquez's testing guide documents this gap clearly.
Four layers of chart testing
1. Linting (no cluster required):
helm lint ./mychart
yamllint ./mychart/values.yaml
Fast, catches structural errors and YAML syntax issues. Run as a pre-commit hook.
2. Schema validation (no cluster required):
# kubeconform is the maintained successor to the deprecated kubeval
helm template my-release ./mychart | kubeconform \
  --kubernetes-version 1.30.0 \
  --strict \
  --ignore-missing-schemas
Validates rendered manifests against the Kubernetes API schema for a specific version. Catches field name mistakes, deprecated API versions, and type errors.
3. Unit testing (no cluster required):
The helm-unittest plugin tests template logic against fixture values without a cluster:
# tests/deployment_test.yaml
suite: Deployment tests
templates:
  - deployment.yaml
tests:
  - it: should set the correct replica count
    set:
      replicaCount: 5
    asserts:
      - equal:
          path: spec.replicas
          value: 5
  - it: should use the image from values
    asserts:
      - matchRegex:
          path: spec.template.spec.containers[0].image
          pattern: registry.internal/myapp:.*
4. Integration testing (requires a cluster):
The chart-testing (ct) tool, an official Helm/CNCF project, installs the chart in a live cluster, runs helm test, and validates upgrades from previous versions:
ct install --config ct.yaml --charts ./mychart
helm test
Any Pod or Job in templates/ annotated with "helm.sh/hook": test is a test resource. The container must exit 0 to pass. Organize tests under templates/tests/ and add "helm.sh/hook-delete-policy": before-hook-creation to avoid accumulating test pods between runs.
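A minimal test Pod following this pattern (closely modeled on the helm create scaffold; the service name and port are placeholders):

```yaml
# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox:1.36
      command: ["wget", "-qO-", "{{ .Release.Name }}-myapp:80"]
```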
Minimum CI pipeline
# Pre-commit (developer workstation):
helm lint ./mychart
yamllint .
# Pull request (automated):
helm template my-release ./mychart | kubeconform --kubernetes-version 1.30.0 --strict
trivy helm --exit-code 1 --severity CRITICAL,HIGH ./mychart/
# Staging (pre-production, requires cluster):
ct install --config ct.yaml
Helm 4 changes that affect chart authors
Helm 4.0, released November 12, 2025, introduced changes that chart authors should be aware of:
- Server-side apply (SSA) is the default for new installations. Releases originally installed by Helm 3 continue using client-side apply unless --server-side is passed explicitly.
- --atomic renamed to --rollback-on-failure, and --force renamed to --force-replace. Charts or CI scripts referencing the old flag names need updating.
- Post-renderers now require plugin names. Passing arbitrary executables via --post-renderer no longer works.
- Chart format v3 is listed as "coming soon" but has no published specification yet. Charts with apiVersion: v2 continue to work without changes.
Helm 3 receives security fixes until November 11, 2026 and bug fixes until July 8, 2026.
Common troubleshooting
- helm lint passes but kubectl apply fails: lint does not validate Kubernetes schema. Pipe through kubeconform as described in the testing section.
- Subchart overwrites your helper template: template names are globally scoped. Namespace every {{ define }} with your chart name.
- ConfigMap changes do not trigger a pod restart: add the checksum/config annotation to the pod template.
- Hook Job not running in ArgoCD: hooks require Helm-managed installs. Use ArgoCD's resource hooks or init containers instead.
- --set behaves unexpectedly with nested values: check for camelCase violations, unquoted strings, or array-index fragility.