Goal
At the end of this article you will have a working Ingress resource that routes external HTTP/HTTPS traffic to one or more backend Services using host-based and path-based rules, with automated TLS certificates via cert-manager.
Prerequisites
- A Kubernetes cluster running v1.28 or later, with kubectl access and permissions to create Ingress, IngressClass, and Secret objects
- At least one application deployed behind a ClusterIP Service. Ingress routes to Services, not directly to pods. If you are unfamiliar with Service types, the linked article covers ClusterIP, NodePort, LoadBalancer, and when to use each.
- An Ingress controller installed in the cluster (see the next section). Without a controller, Ingress resources are inert.
- For automated TLS: cert-manager v1.16+ installed with a working ClusterIssuer. The cert-manager documentation covers installation.
- Helm 3.x installed locally (for controller installation examples)
- DNS records pointing your domain(s) to the Ingress controller's external IP
Ingress vs Service LoadBalancer
Ingress operates at OSI Layer 7 (HTTP/HTTPS). A single Ingress controller sits behind one cloud load balancer and routes traffic to many backend Services based on hostname and URL path. A Service of type LoadBalancer operates at Layer 4 (TCP/UDP) and provisions one cloud load balancer per Service.
The cost difference grows fast. Ten LoadBalancer Services mean ten cloud load balancers, each with its own IP and monthly cost. On AWS, an NLB runs roughly $16/month before data transfer. Ingress collapses all HTTP/HTTPS routing behind a single load balancer.
| Criteria | LoadBalancer Service | Ingress |
|---|---|---|
| Protocol | TCP, UDP, any | HTTP/HTTPS only |
| Routing logic | None (IP + port) | Host-based, path-based |
| TLS termination | No (pass-through or handled elsewhere) | Yes, at the edge |
| Cloud LBs provisioned | One per Service | One total |
| Use case | Non-HTTP protocols, single-service exposure | Multiple HTTP services behind one IP |
Use LoadBalancer for non-HTTP protocols (raw TCP, UDP, gRPC without HTTP transcoding). Use Ingress for everything HTTP/HTTPS.
IngressClass and controller selection
An IngressClass is a cluster-scoped resource that binds an Ingress to a specific controller implementation. Without it, a cluster with multiple controllers cannot determine which one should process a given Ingress.
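A minimal IngressClass manifest looks like this. The controller string shown matches the Traefik value that appears in the verification output later in this article; treat the exact value as controller- and chart-dependent:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
spec:
  # Identifies which controller implementation processes Ingresses
  # that reference this class via spec.ingressClassName
  controller: traefik.io/ingress-controller
```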
Step 1: install an Ingress controller
The Kubernetes API defines the Ingress resource but ships no controller; you must install one. If you are considering ingress-nginx, note that the community controller (kubernetes/ingress-nginx) was retired in March 2026 and receives no further security patches. For new deployments, choose an actively maintained controller.
Common choices as of April 2026:
- Traefik (traefik.io): dynamic service discovery, native Let's Encrypt support, straightforward migration from ingress-nginx
- HAProxy Ingress (haproxy-ingress.github.io): highest raw throughput in published benchmarks (~42,000 req/s vs ~19,000 for Traefik), zero-downtime reloads, HTTP/3 native
- Contour (projectcontour.io): Envoy-based, good Gateway API support
- AWS Load Balancer Controller: cloud-native on EKS
Example installation (Traefik via Helm):
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik \
--namespace traefik --create-namespace
Verify the controller is running:
kubectl get pods -n traefik
# Expected: traefik-<hash> in Running state
Step 2: verify the IngressClass
Most Helm charts create an IngressClass automatically. Check:
kubectl get ingressclass
Expected output:
NAME CONTROLLER PARAMETERS AGE
traefik traefik.io/ingress-controller <none> 2m
If your controller's IngressClass has the annotation ingressclass.kubernetes.io/is-default-class: "true", Ingress resources that omit ingressClassName automatically use it. In clusters with multiple controllers, always set ingressClassName explicitly on every Ingress.
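The default-class annotation is set on the IngressClass itself. A sketch (only one class per cluster should carry this annotation):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
  annotations:
    # Ingresses that omit spec.ingressClassName fall back to this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: traefik.io/ingress-controller
```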
Step 3: reference the IngressClass in your Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app
spec:
ingressClassName: traefik # must match the IngressClass name
rules: [...]
If the IngressClass name is wrong or missing and no default is set, the controller ignores the Ingress entirely. No error, no event, just silence. This is the single most common setup mistake.
Host-based routing
Host-based routing inspects the Host HTTP header and sends traffic to different Services based on the hostname.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: multi-host
namespace: production
spec:
ingressClassName: traefik
rules:
- host: app.staging.infra.example.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: frontend
port:
number: 80
- host: api.staging.infra.example.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: backend-api
port:
number: 8080
Requests to app.staging.infra.example.com route to the frontend Service. Requests to api.staging.infra.example.com route to backend-api.
Kubernetes supports single-level wildcard hosts: *.example.com matches app.example.com and api.example.com, but not example.com (bare domain) or foo.bar.example.com (two levels deep).
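The single-level wildcard rule can be sketched in a few lines of Python. This is illustrative only, not controller code: the `*` may replace exactly one leftmost DNS label.

```python
def wildcard_host_matches(rule_host: str, request_host: str) -> bool:
    """Single-level wildcard matching as the Ingress spec defines it.

    Sketch, not controller code: '*' must replace exactly one
    (leftmost) DNS label.
    """
    if not rule_host.startswith("*."):
        return rule_host == request_host
    suffix = rule_host[1:]  # e.g. ".example.com"
    if not request_host.endswith(suffix):
        return False
    first_label = request_host[: -len(suffix)]
    # Exactly one non-empty label: no dots allowed inside it
    return bool(first_label) and "." not in first_label

assert wildcard_host_matches("*.example.com", "app.example.com")
assert not wildcard_host_matches("*.example.com", "example.com")          # bare domain
assert not wildcard_host_matches("*.example.com", "foo.bar.example.com")  # two levels
```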
A rule without a host field is a catch-all that matches all incoming requests regardless of hostname.
Path-based routing and pathType
Within a host block, path rules route requests to different backends based on the URL path. Every path rule must set a pathType field (mandatory in the networking.k8s.io/v1 API, GA since Kubernetes v1.19).
Prefix (most common)
Matches the path element-by-element. /api matches /api, /api/, and /api/v1/users, but not /apiv1. The boundary is always a / character.
- path: /api
pathType: Prefix
backend:
service:
name: api-service
port:
number: 80
Exact
Matches only the literal path. /api matches /api but not /api/ (trailing slash) and not /api/v1.
- path: /healthz
pathType: Exact
backend:
service:
name: health-service
port:
number: 8080
Use Exact sparingly. A user or load balancer appending a trailing slash breaks the match.
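The Prefix and Exact semantics above can be captured in a short Python sketch. This models the spec's matching rules for illustration; it is not how any controller actually implements them:

```python
def path_matches(path: str, rule_path: str, path_type: str) -> bool:
    """Sketch of Ingress pathType semantics (illustrative, not controller code)."""
    if path_type == "Exact":
        # Literal match only; trailing slash breaks it
        return path == rule_path
    if path_type == "Prefix":
        if rule_path == "/":
            return True
        # Element-by-element: the boundary is always a '/' character
        rule = rule_path.rstrip("/")
        return path == rule or path.startswith(rule + "/")
    raise ValueError("ImplementationSpecific is controller-defined")

# Prefix: /api matches /api, /api/, /api/v1/users, but not /apiv1
assert path_matches("/api", "/api", "Prefix")
assert path_matches("/api/v1/users", "/api", "Prefix")
assert not path_matches("/apiv1", "/api", "Prefix")
# Exact: /healthz does not match /healthz/
assert path_matches("/healthz", "/healthz", "Exact")
assert not path_matches("/healthz/", "/healthz", "Exact")
```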
ImplementationSpecific
Delegates matching to the controller. With ingress-nginx and the annotation nginx.ingress.kubernetes.io/use-regex: "true", the path becomes a POSIX extended regular expression.
Warning: enabling use-regex on any Ingress for a given host forces case-insensitive regex matching on all paths for that host, including paths defined by other Ingress resources. This side effect crosses Ingress boundaries and can break Exact and Prefix matching for sibling services on the same host.
Path precedence
When multiple rules match, the longest matching path wins. Exact beats Prefix at equal path length.
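A sketch of the precedence rule, using hypothetical (path, pathType, backend) tuples rather than real Ingress objects:

```python
def pick_backend(path, rules):
    """Longest matching path wins; Exact beats Prefix at equal length.

    Sketch of Ingress path precedence; rules are hypothetical
    (rule_path, path_type, backend) tuples.
    """
    def matches(rule_path, rule_type):
        if rule_type == "Exact":
            return path == rule_path
        return rule_path == "/" or path == rule_path or path.startswith(rule_path + "/")

    candidates = [r for r in rules if matches(r[0], r[1])]
    # Longest path first; at a tie, Exact outranks Prefix
    best = max(candidates, key=lambda r: (len(r[0]), r[1] == "Exact"))
    return best[2]

rules = [("/", "Prefix", "frontend"),
         ("/api", "Prefix", "backend-api"),
         ("/api", "Exact", "api-root")]
assert pick_backend("/api/v1/orders", rules) == "backend-api"  # longest match
assert pick_backend("/api", rules) == "api-root"               # Exact wins the tie
assert pick_backend("/about", rules) == "frontend"             # catch-all
```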
Combined example
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-routing
namespace: production
spec:
ingressClassName: traefik
rules:
- host: app.staging.infra.example.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: backend-api
port:
number: 8080
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
Requests to /api/v1/orders match the /api Prefix rule and go to backend-api. Everything else falls through to the / rule and goes to frontend.
Verify that routing works:
kubectl describe ingress app-routing -n production
# Check the "Rules" section for correct host/path/backend mappings
curl -H "Host: app.staging.infra.example.com" http://<ingress-controller-ip>/api/health
# Expected: response from backend-api
TLS termination
TLS termination means the Ingress controller decrypts HTTPS at the edge. Backend Services receive plain HTTP, which centralizes certificate management and offloads cryptographic overhead from application pods.
Step 1: create a TLS Secret
The Secret must exist in the same namespace as the Ingress. It must contain tls.crt (PEM certificate chain) and tls.key (PEM private key).
kubectl create secret tls app-tls-cert \
--cert=fullchain.pem \
--key=privkey.pem \
-n production
Step 2: reference the Secret in the Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-ingress
namespace: production
spec:
ingressClassName: traefik
tls:
- hosts:
- app.staging.infra.example.com
secretName: app-tls-cert
rules:
- host: app.staging.infra.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
Every host listed under spec.tls[].hosts must also appear in spec.rules. The Ingress API only supports TLS on port 443.
Verify TLS is active:
kubectl describe ingress tls-ingress -n production
# Look for "TLS: app-tls-cert terminates app.staging.infra.example.com"
curl -v https://app.staging.infra.example.com/
# Expected: TLS handshake with the correct certificate
Step 3: automate certificates with cert-manager
Managing certificates manually does not scale. cert-manager watches for Ingress resources with its annotations, automatically creates Certificate objects, issues certificates via ACME (Let's Encrypt) HTTP-01 or DNS-01 challenges, and renews before expiry.
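A ClusterIssuer for Let's Encrypt with the HTTP-01 solver might look like the sketch below. The name letsencrypt-prod matches the annotation used in this article's examples; the email address and the account-key Secret name are placeholders you must change:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com  # placeholder: use a monitored address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # cert-manager stores the ACME account key here
    solvers:
      - http01:
          ingress:
            # Solver Ingresses are created with this class so the
            # challenge traffic reaches the controller
            ingressClassName: traefik
```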
Add the cert-manager annotation and a secretName that cert-manager will create and maintain:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
namespace: production
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod # references your ClusterIssuer
spec:
ingressClassName: traefik
tls:
- hosts:
- app.staging.infra.example.com
secretName: app-auto-tls # cert-manager creates this Secret
rules:
- host: app.staging.infra.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
Verify the certificate was issued:
kubectl get certificate -n production
# Expected: app-auto-tls with READY=True
kubectl describe certificate app-auto-tls -n production
# Check "Status > Conditions" for "Ready: True" and the expiry date
If the certificate stays in a non-ready state, check the cert-manager logs and the Challenge resource:
kubectl get challenges -n production
kubectl logs -n cert-manager deploy/cert-manager
Common ingress-nginx annotations
If you are running the community ingress-nginx controller (or its NGINX Inc. successor), annotations prefixed with nginx.ingress.kubernetes.io/ customize behavior per-Ingress. These are controller-specific and do not apply to Traefik, HAProxy, or other controllers.
This section covers the annotations I see used most in production. A complete reference lives in the ingress-nginx annotation docs.
Routing and rewrites
| Annotation | Purpose | Example |
|---|---|---|
| `rewrite-target` | Rewrite the path before forwarding; supports capture groups | `/$2` |
| `use-regex` | Enable POSIX regex path matching | `"true"` |
| `backend-protocol` | Protocol to the backend: HTTP, HTTPS, GRPC, GRPCS | `"HTTPS"` |
A common rewrite pattern for stripping a path prefix:
metadata:
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: app.staging.infra.example.com
http:
paths:
- path: /app1(/|$)(.*) # capture group $2 = remainder
pathType: ImplementationSpecific
backend:
service:
name: app1
port:
number: 80
A request to /app1/login arrives at the backend as /login.
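The capture-group mechanics can be checked outside the cluster with Python's re module. This sketch only models what the annotation pair does to the path; the real rewriting happens inside NGINX:

```python
import re

# The path regex from the Ingress above; (/|$) is the boundary,
# group 2 captures the remainder of the path
pattern = re.compile(r"/app1(/|$)(.*)")

def rewrite(request_path: str) -> str:
    m = pattern.match(request_path)
    if m is None:
        return request_path      # rule does not match; no rewrite
    return "/" + m.group(2)      # rewrite-target: /$2

assert rewrite("/app1/login") == "/login"
assert rewrite("/app1") == "/"
assert rewrite("/app1x") == "/app1x"  # (/|$) boundary prevents a match
```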
Request handling
| Annotation | Purpose | Default | Example |
|---|---|---|---|
| `proxy-body-size` | Max request body; returns 413 if exceeded | 1m | `"50m"` |
| `proxy-read-timeout` | Upstream read timeout (seconds) | 60 | `"300"` |
| `proxy-connect-timeout` | Connection timeout to upstream (seconds) | 5 | `"30"` |
Rate limiting
| Annotation | Purpose | Example |
|---|---|---|
| `limit-rps` | Requests per second per IP | `"20"` |
| `limit-connections` | Concurrent connections per IP | `"10"` |
Per-replica gotcha. Rate limit values apply per ingress-nginx replica. Three replicas with limit-rps: "10" allows 30 requests per second total from a single IP. Account for this when autoscaling.
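The arithmetic is simple but easy to forget. A sketch of the adjustment, with hypothetical helper names:

```python
# limit-rps applies per controller replica, so the effective
# cluster-wide ceiling for one client IP scales with replica count.
def effective_cluster_rps(limit_rps: int, replicas: int) -> int:
    return limit_rps * replicas

def per_replica_limit(intended_cluster_rps: int, replicas: int) -> int:
    # Divide the intended cluster-wide limit by the replica count
    return intended_cluster_rps // replicas

assert effective_cluster_rps(10, 3) == 30  # 3 replicas at limit-rps "10"
assert per_replica_limit(30, 3) == 10      # to cap one IP at ~30 req/s total
```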
TLS control
| Annotation | Purpose | Example |
|---|---|---|
| `ssl-redirect` | Toggle the automatic HTTP-to-HTTPS redirect (default `"true"` when TLS is set) | `"false"` |
| `force-ssl-redirect` | Force redirect even when TLS is terminated externally (e.g., AWS ELB) | `"true"` |
IP allowlisting
annotations:
nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,203.0.113.5/32"
Requests from IPs outside these CIDR ranges receive a 403.
When to migrate to Gateway API
Gateway API is the official successor to the Ingress API. It reached v1.0 GA in October 2023 and is at v1.4.0 as of November 2025. HTTPRoute, Gateway, and GatewayClass are all in the Standard channel with backward-compatibility guarantees.
The main architectural improvement: Gateway API separates infrastructure (Gateway, owned by cluster operators) from routing (HTTPRoute, owned by developers). With Ingress, both concerns are merged into one resource, which forces either overly broad RBAC or tickets back and forth between teams.
Migrate now if:
- You are running kubernetes/ingress-nginx (community). The repository was archived in March 2026. No security patches will follow.
- You are starting a new cluster. There is no migration cost.
- You need advanced routing: traffic splitting, header-based matching, request mirroring, or retries. These are built into Gateway API's HTTPRoute spec, not available in standard Ingress.
Migrate at your pace if:
- You use a maintained controller (Traefik, HAProxy, Contour) that already supports Gateway API as a parallel option. Both Ingress and Gateway API resources can coexist.
The ingress2gateway tool (v1.0, March 2026) converts Ingress manifests to Gateway API resources, supporting 30+ common ingress-nginx annotations. For a full migration walkthrough, see the ingress-nginx to Gateway API migration guide.
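For orientation, the api host from the earlier host-based routing example maps to an HTTPRoute roughly like this. The shared-gateway parent and its infra namespace are hypothetical; in the Gateway API model a cluster operator would own that Gateway:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-api
  namespace: production  # developers own the route
spec:
  parentRefs:
    - name: shared-gateway  # hypothetical Gateway owned by cluster operators
      namespace: infra      # hypothetical namespace
  hostnames:
    - api.staging.infra.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: backend-api
          port: 8080
```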
Complete example: production-ready Ingress
This manifest brings together host-based routing, path-based routing, TLS with cert-manager, and common annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: production-ingress
namespace: production
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "20m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
nginx.ingress.kubernetes.io/limit-rps: "25"
spec:
ingressClassName: nginx
tls:
- hosts:
- app.staging.infra.example.com
- api.staging.infra.example.com
secretName: production-tls
rules:
- host: app.staging.infra.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- host: api.staging.infra.example.com
http:
paths:
- path: /v1
pathType: Prefix
backend:
service:
name: api-v1
port:
number: 8080
- path: /v2
pathType: Prefix
backend:
service:
name: api-v2
port:
number: 8080
- path: /
pathType: Prefix
backend:
service:
name: api-v2 # default to latest version
port:
number: 8080
Common troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| Ingress exists but traffic never arrives | No controller installed, or `ingressClassName` is wrong/missing | `kubectl get ingressclass` and verify the name matches |
| 404 on paths that should match | Wrong `pathType` (typically Exact where Prefix is needed) | Switch to Prefix for sub-tree routing |
| TLS handshake fails with "secret not found" | TLS Secret in a different namespace than the Ingress | Move the Secret to the Ingress namespace |
| 502 Bad Gateway | Backend Service has no ready endpoints | `kubectl get endpoints <service>` to verify pod IPs |
| Rate limits feel too high | `limit-rps` is per replica, not cluster-wide | Divide intended limit by replica count |
| `use-regex: "true"` breaks other services on the same host | Regex mode applies to all paths for that host | Isolate regex-using Ingresses on separate hosts |
For deeper diagnosis, check controller logs:
kubectl logs -n <controller-namespace> deploy/<controller-deployment>