GitOps with Argo CD: declarative Kubernetes deployments

Argo CD turns a Git repository into the single source of truth for your cluster state. You commit manifests, Argo CD reconciles them. This tutorial walks through a full GitOps setup: installing Argo CD with Helm, deploying an application via the Application CRD, configuring sync policies, ordering resources with sync waves, bootstrapping a cluster with the app-of-apps pattern, handling secrets safely, and managing multiple clusters from one control plane.

What you will learn

By the end of this tutorial you will have a working Argo CD installation managing a Kubernetes application through Git. You will understand the Application CRD, sync policies, sync waves, the app-of-apps pattern, how to keep secrets out of Git, and how to extend the setup across multiple clusters.

Prerequisites

  • A Kubernetes cluster (v1.21 or later; a local kind or minikube cluster works for the install and Application steps)
  • Helm 3.x or 4.x installed
  • kubectl configured against the cluster
  • A Git repository you control (GitHub, GitLab, Bitbucket, or self-hosted)
  • Familiarity with Kubernetes Deployments, Services, and namespaces

What GitOps means

GitOps was coined by Weaveworks in 2017 and formalized by the OpenGitOps project (a CNCF working group) into four principles:

  1. Declarative. The entire desired system state is expressed as declarations in Git (Kubernetes manifests, Helm charts, Kustomize overlays).
  2. Versioned and immutable. Every change is a Git commit. Rollbacks become git revert. Audit trails become git log.
  3. Pulled automatically. An agent running inside the cluster pulls changes from Git. The CI pipeline never needs cluster credentials.
  4. Continuously reconciled. The agent detects drift between Git and the live cluster and corrects it. Manual kubectl edits are overwritten.

The pull model is the security cornerstone. In traditional CI/CD, the pipeline pushes into the cluster, so it holds cluster credentials. In GitOps, the agent initiates every connection outbound, and nothing external pushes in.

Argo CD is the dominant tool implementing these principles. It reached CNCF graduated status in December 2022 and, according to the 2025 CNCF End User Survey, runs in nearly 60% of Kubernetes clusters doing application delivery.

Install Argo CD with Helm

Argo CD v3 (GA April 2025) is the current stable release. The community-maintained Helm chart (argo-helm) is the most widely used way to install and upgrade it.

Add the repository and install:

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

helm install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  --version 7.8.0        # pin to a specific chart version

For a production cluster, create argocd-values.yaml with HA settings:

# argocd-values.yaml — Argo CD HA configuration (chart 7.x)
server:
  replicas: 2
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

controller:
  replicas: 1              # StatefulSet; enable sharding for multiple replicas
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 1Gi

repoServer:
  replicas: 2
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

redis-ha:
  enabled: true            # swap single Redis for a Sentinel-based HA cluster

configs:
  params:
    server.insecure: "true" # terminate TLS at your Ingress, not at the Argo CD server

Install with the values file:

helm install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  -f argocd-values.yaml

Checkpoint. Verify all pods are running:

kubectl get pods -n argocd

Expected output (HA install): argocd-server, argocd-repo-server, argocd-application-controller, argocd-redis-ha-server, argocd-applicationset-controller, and argocd-dex-server pods in Running state.

Access the UI. Port-forward for initial access:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Retrieve the initial admin password, then change it immediately:

# Retrieve password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d && echo

# After logging in and changing the password, delete the secret
kubectl -n argocd delete secret argocd-initial-admin-secret

Open https://localhost:8080, log in as admin with the retrieved password, and change it through User Info > Update Password in the UI.
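If you prefer the CLI, the argocd binary can do the same through the port-forward. This is a sketch that assumes the port-forward from the previous step is still running; --insecure is needed because the port-forwarded endpoint presents a self-signed certificate:

```shell
# Log in via the argocd CLI through the port-forward
argocd login localhost:8080 \
  --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret \
      -o jsonpath='{.data.password}' | base64 -d)" \
  --insecure

# Change the admin password interactively (prompts for current and new)
argocd account update-password
```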

Your first Application

The Application CRD is the core building block. It tells Argo CD what to deploy, from where, and how to keep it synchronized.

Push a simple nginx Deployment and Service to your Git repository under deploy/nginx/:

# deploy/nginx/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27-alpine
          ports:
            - containerPort: 80
---
# deploy/nginx/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80

Now create the Application manifest:

# bootstrap/nginx-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io   # cascade-delete cluster resources on app deletion
spec:
  project: default
  source:
    repoURL: https://github.com/yourorg/gitops-demo.git
    targetRevision: main
    path: deploy/nginx
  destination:
    server: https://kubernetes.default.svc     # local cluster
    namespace: demo
  syncPolicy:
    automated:
      prune: true          # delete resources removed from Git
      selfHeal: true       # revert manual kubectl changes
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m

Apply it:

kubectl apply -f bootstrap/nginx-app.yaml

Checkpoint. Within about 30 seconds, check the Application status (Argo CD polls Git every 3 minutes by default, but the first sync runs immediately when the Application is created):

kubectl get application nginx -n argocd

Expected: STATUS: Synced, HEALTH: Healthy. The demo namespace now contains 2 nginx pods and a Service.

Argo CD tracks two dimensions for each Application. Sync status (Synced, OutOfSync) tells you whether the cluster matches Git. Health status (Healthy, Progressing, Degraded) tells you whether resources are actually working. An Application can be Synced but Degraded if manifests were applied but pods are crash-looping.

Helm and Kustomize sources

The source field supports more than plain YAML. For a Helm chart:

source:
  repoURL: https://charts.bitnami.com/bitnami
  chart: postgresql
  targetRevision: 16.4.3           # semver chart version
  helm:
    releaseName: pg-primary
    valueFiles:
      - values.yaml
      - values-production.yaml
    parameters:
      - name: auth.postgresPassword
        value: "changeme"          # override a single value

For Kustomize, point path at a directory containing kustomization.yaml and Argo CD detects it automatically.
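A hypothetical Kustomize source for illustration; the optional kustomize block can override details such as images without editing the overlay itself (repo URL and paths are placeholders):

```yaml
# Sketch of a Kustomize source
source:
  repoURL: https://github.com/yourorg/gitops-demo.git
  targetRevision: main
  path: k8s/overlays/production    # directory containing kustomization.yaml
  kustomize:
    images:
      - nginx=nginx:1.27-alpine    # override an image without touching the overlay
```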

Sync policies and options

The syncPolicy block controls how Argo CD reconciles.

All of the following default to false:

  • automated.prune: deletes resources that are no longer in Git
  • automated.selfHeal: reverts manual changes back to the Git state
  • automated.allowEmpty: allows syncing when the source produces zero resources
  • syncOptions.ServerSideApply: uses Kubernetes Server-Side Apply (handles large CRDs and managedFields conflicts)
  • syncOptions.PruneLast: defers orphan deletion until after new resources succeed
  • syncOptions.ApplyOutOfSyncOnly: only applies resources that actually differ
  • syncOptions.CreateNamespace: creates the destination namespace if missing

ignoreDifferences for external controllers. When a HorizontalPodAutoscaler manages replicas, Argo CD reports false drift every time the HPA scales. Suppress it:

spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas

Add RespectIgnoreDifferences=true to syncOptions so these fields are also skipped during sync, not just during diff.
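Putting the common options together, a syncPolicy for an Application with externally managed fields might look like this (a sketch; enable only the options you actually need):

```yaml
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
    - ServerSideApply=true             # Server-Side Apply for large CRDs
    - ApplyOutOfSyncOnly=true          # skip resources that already match Git
    - RespectIgnoreDifferences=true    # honor ignoreDifferences during sync, not just diff
    - CreateNamespace=true
```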

Sync waves and hooks

Sync waves control the order of resource application within a single sync. Hooks run at specific lifecycle phases.

Waves. Annotate any resource with argocd.argoproj.io/sync-wave. Resources without the annotation default to wave 0. Argo CD applies a wave, waits for all resources in it to become Healthy, then moves to the next.

# Wave -1: namespace first
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
---
# Wave 0 (default): database
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# Wave 1: application (after database is healthy)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "1"

Hooks. Hooks are resources (typically Jobs) annotated with argocd.argoproj.io/hook:

  • PreSync: runs before manifests apply. Use for database migrations and schema checks.
  • PostSync: runs after all resources are Healthy. Use for smoke tests and cache warming.
  • SyncFail: runs only on failure. Use for cleanup, rollback scripts, and alerting.

A database migration as a PreSync hook:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  backoffLimit: 2
  activeDeadlineSeconds: 300       # fail after 5 minutes; never block indefinitely
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: liquibase/liquibase:4.31-alpine
          args:
            - --changeLogFile=changelog/db.changelog-master.xml
            - --url=$(DATABASE_URL)
            - update
          envFrom:
            - secretRef:
                name: db-credentials

Always set activeDeadlineSeconds on hook Jobs. A hung PreSync job blocks the entire deployment indefinitely.
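A PostSync smoke test follows the same shape. This sketch assumes a hypothetical /healthz endpoint on a Service named my-app; the HookSucceeded delete policy removes the Job only when it passes, so failed runs stay around for debugging:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: smoke-test
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 1
  activeDeadlineSeconds: 120       # same rule as PreSync: never block indefinitely
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: smoke
          image: curlimages/curl:8.10.1
          args: ["-fsS", "http://my-app.my-app.svc.cluster.local/healthz"]
```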

App-of-apps pattern

The app-of-apps pattern uses a single parent Application that points to a Git directory containing child Application manifests. When the parent syncs, it creates, updates, or deletes child Applications. Think of it as a bootstrap: apply one manifest, and the entire cluster stack materializes.

Repository structure:

gitops-repo/
├── bootstrap/
│   └── parent-app.yaml          # apply once manually
├── apps/
│   ├── nginx-ingress.yaml       # child Application
│   ├── cert-manager.yaml        # child Application
│   ├── prometheus-stack.yaml    # child Application
│   └── my-service.yaml          # child Application
└── charts/
    └── my-service/
        ├── Chart.yaml
        └── values.yaml

The parent:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/yourorg/gitops-repo.git
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: null                # keep parent manual for safety

Keep the parent's sync manual (automated: null). If the apps/ directory is accidentally emptied and the parent has prune: true, it cascade-deletes every child Application. Child Applications themselves can safely use automated: {prune: true, selfHeal: true}.
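For reference, a child Application in apps/ looks like any other Application. Here is a hypothetical cert-manager child installing the upstream chart (chart version and values are illustrative, not prescriptive):

```yaml
# apps/cert-manager.yaml — child created and managed by the parent
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: v1.16.2          # pin the chart version
    helm:
      values: |
        crds:
          enabled: true
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```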

When to use ApplicationSet instead

ApplicationSet generates Application manifests dynamically from templates and generators. Use it when the set of Applications is not fixed: multi-cluster deployments, per-team namespaces, or any scenario where adding a folder or cluster label should automatically produce a new Application.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/yourorg/gitops-repo.git
        revision: HEAD
        directories:
          - path: apps/*          # each subdirectory becomes an Application
  template:
    metadata:
      name: '{{path.basename}}'   # Application named after the directory
    spec:
      project: default
      source:
        repoURL: https://github.com/yourorg/gitops-repo.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Rule of thumb: app-of-apps for a fixed set of platform components (ingress, monitoring, cert-manager). ApplicationSet for dynamic workloads across environments or clusters.

Secrets in GitOps

Git history is permanent: a Kubernetes Secret committed in plaintext remains recoverable even after it is deleted. The rule is absolute: never store plaintext secrets in Git. Argo CD deliberately provides no built-in secret management (any secret it rendered would end up cached in the repo server and Redis). Use one of these approaches instead.

Sealed Secrets

Sealed Secrets (Bitnami) runs a controller in-cluster that holds a private key. You encrypt secrets locally with the public key. The resulting SealedSecret CRD is safe to commit; only the in-cluster controller can decrypt it.

# Create and seal a secret
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml | \
  kubeseal --format yaml > manifests/sealed-db-credentials.yaml

Back up the controller's private key separately from Git. Without it, sealed secrets become unrecoverable after a cluster rebuild.
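One way to take that backup (a sketch; the controller and its keys are assumed to live in the kube-system namespace, the upstream default):

```shell
# Export the controller's sealing keys — store this file in a vault, never in Git
kubectl get secret -n kube-system \
  -l sealedsecrets.bitnami.com/sealed-secrets-key \
  -o yaml > sealed-secrets-master-key.yaml

# After a cluster rebuild: restore the keys, then restart the controller
kubectl apply -f sealed-secrets-master-key.yaml
kubectl rollout restart deployment sealed-secrets-controller -n kube-system
```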

External Secrets Operator

External Secrets Operator (ESO) pulls secrets from external stores (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, HashiCorp Vault) at runtime and creates native Kubernetes Secret objects. Your Git repository contains only references:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: db-credentials
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: my-app/database
        property: username
    - secretKey: password
      remoteRef:
        key: my-app/database
        property: password

ESO handles automatic rotation via refreshInterval. When secrets rotate, Argo CD may report spurious drift on the generated Secret. Suppress with ignoreDifferences on the Secret's /data field.
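The suppression looks like this in the Application that manages the namespace (a sketch; the name matches the target Secret above):

```yaml
spec:
  ignoreDifferences:
    - kind: Secret
      name: db-credentials
      namespace: my-app
      jsonPointers:
        - /data        # ESO rewrites this on every refresh
```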

Comparison

  • Setup complexity: Sealed Secrets is low; ESO is medium.
  • External dependencies: Sealed Secrets has none; ESO needs a secret provider (AWS, Vault, etc.).
  • Secret rotation: Sealed Secrets requires a manual re-seal; ESO rotates automatically.
  • What Git stores: an encrypted blob with Sealed Secrets; references only with ESO.
  • Best for: Sealed Secrets suits small teams and simple setups; ESO suits organizations with existing secret stores.

Multi-cluster management

The recommended architecture: a dedicated management cluster runs Argo CD and manages target clusters (dev, staging, production, multi-region). The management cluster holds cluster credentials but never runs workloads.

Register a cluster:

argocd cluster add prod-us-context \
  --name production-us-east \
  --label env=production \
  --label region=us-east-1
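
argocd cluster add is imperative. Clusters can also be registered declaratively as Secrets labeled argocd.argoproj.io/secret-type: cluster, which the cluster generator picks up the same way. A sketch with placeholder server URL and credentials (note this Secret holds cluster credentials, so it must not be committed to Git in plaintext):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: production-us-east
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    env: production              # matched by the cluster generator's selector
    region: us-east-1
type: Opaque
stringData:
  name: production-us-east
  server: https://prod-cluster.example.com
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded CA certificate>"
      }
    }
```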

Use an ApplicationSet with the cluster generator to deploy across all matching clusters:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-production
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            env: production
  template:
    metadata:
      name: 'my-app-{{name}}'          # one Application per matching cluster
    spec:
      project: production-project
      source:
        repoURL: https://github.com/yourorg/my-app.git
        targetRevision: main
        path: k8s/overlays/production
      destination:
        server: '{{server}}'           # filled in by the cluster generator
        namespace: my-app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Adding a new production cluster with the label env: production automatically creates an Application for it. No manifest changes needed.

AppProject for access control

In multi-cluster setups, use AppProject to enforce team boundaries. Never use the default project for production workloads; it allows all repositories and all clusters.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production-project
  namespace: argocd
spec:
  sourceRepos:
    - 'https://github.com/yourorg/production-manifests.git'
  destinations:
    - server: https://prod-cluster.example.com
      namespace: '*'
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  roles:
    - name: deploy-pipeline
      policies:
        - p, proj:production-project:deploy-pipeline, applications, sync, production-project/*, allow

What you learned

You set up Argo CD from scratch and deployed an application declaratively through Git. The key concepts:

  • GitOps pulls, not pushes. The agent inside the cluster initiates all connections. CI never holds cluster credentials.
  • The Application CRD ties a Git source to a cluster destination with sync policies that control pruning, self-healing, and retry behavior.
  • Sync waves sequence resource creation (namespace before database before application). Hooks run Jobs at lifecycle phases (PreSync for migrations, PostSync for smoke tests).
  • App-of-apps bootstraps a cluster from a single manifest. ApplicationSet automates dynamic environments.
  • Secrets never go into Git. Sealed Secrets encrypts them; External Secrets Operator references them from external stores.
  • Multi-cluster is a label and an ApplicationSet generator away from single-cluster.

The repository structure that serves most teams well: one Git repo for deployment manifests (separate from application source code), Kustomize overlays or Helm value files per environment in directories (not branches), and Argo CD watching the main branch.
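That layout, sketched (directory names are illustrative):

```
deploy-repo/
├── bootstrap/                 # parent Application / ApplicationSets, applied once
├── apps/                      # child Application manifests
└── my-service/
    ├── base/                  # shared Kustomize base
    └── overlays/
        ├── dev/
        ├── staging/
        └── production/        # environments are directories, not branches
```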
