What you will have at the end
A working Gateway API setup (GatewayClass, Gateway, HTTPRoutes) serving all traffic that ingress-nginx currently handles, with cert-manager issuing TLS certificates automatically, and ingress-nginx decommissioned.
Prerequisites
- A Kubernetes cluster running version 1.29 or later (Gateway API v1.0 reached GA in Kubernetes 1.29)
- kubectl configured with cluster-admin or equivalent RBAC
- Helm 3.x installed locally
- An existing ingress-nginx deployment you want to replace
- cert-manager v1.16+ installed (v1.20+ if you need ListenerSet support for multi-tenant TLS)
- DNS access to update A/AAAA/CNAME records for your domains
- Familiarity with Kubernetes Services and how L4/L7 traffic routing works in a cluster
Why ingress-nginx is retired
SIG Network archived the ingress-nginx repository on March 24, 2026. The project had been in best-effort maintenance since November 2025, running on one or two volunteer maintainers. The critical catalyst was CVE-2025-1974 (IngressNightmare), a CVSS 9.8 unauthenticated RCE through the admission webhook that could lead to full cluster compromise. Four more HIGH-severity CVEs followed in February 2026.
With the repository archived, future vulnerabilities will receive no patches. EOL software in the L7 data path triggers findings in SOC 2, PCI-DSS, ISO 27001, and HIPAA frameworks. The official recommendation: migrate to Gateway API.
Gateway API concepts in 60 seconds
Gateway API replaces the monolithic Ingress resource with a role-based hierarchy of three resource types:
| Resource | Scope | Who owns it | Purpose |
|---|---|---|---|
| GatewayClass | Cluster | Infrastructure provider | Identifies which controller manages Gateways |
| Gateway | Namespace | Cluster operator | Declares listeners (ports, protocols, TLS), allowed route namespaces |
| HTTPRoute | Namespace | Application developer | Defines host/path routing, filters, backend targets |
This separation means operators control infrastructure (which ports are open, which TLS certs are used) while developers control routing (which paths go to which services). The same HTTPRoute works across Envoy Gateway, NGINX Gateway Fabric, Cilium, Istio, and other conformant implementations.
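For reference, a GatewayClass is tiny. This is roughly what the Envoy Gateway Helm chart installs for you in step 2; you normally never write one by hand:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  # Ties every Gateway with gatewayClassName: envoy-gateway
  # to the Envoy Gateway controller
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
```

The controllerName is the fixed identifier each implementation publishes; it is how multiple Gateway API controllers can coexist in one cluster without stealing each other's Gateways.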
Step 1: choose a Gateway API implementation
You need a controller that watches Gateway API resources and programs the data plane. Pick one before you start converting manifests.
| Implementation | Data plane | Best for |
|---|---|---|
| Envoy Gateway | Envoy Proxy | Highest conformance (v1.4.0 full), most active community, best ingress2gateway emitter support |
| NGINX Gateway Fabric | NGINX OSS/Plus | Familiar NGINX data plane; natural continuity if your team knows NGINX internals |
| Cilium | eBPF + Envoy | Already running Cilium as CNI; strong L4 performance |
| Istio | Envoy (sidecar/ambient) | Already running Istio; want unified ingress + service mesh |
For most teams migrating from ingress-nginx with no existing service mesh, Envoy Gateway is the pragmatic default. If your team is deeply familiar with NGINX configuration and wants the smallest behavioral delta, NGINX Gateway Fabric makes sense.
The rest of this guide uses Envoy Gateway in the examples. The Gateway and HTTPRoute manifests are identical regardless of implementation; only the installation step and implementation-specific policies (rate limiting, auth) differ.
Step 2: install Gateway API CRDs and the controller
Install the Gateway API CRDs first, then the controller.
# Install Gateway API v1.5.0 CRDs (standard channel)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.5.0/standard-install.yaml
Expected output:
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
...
Install Envoy Gateway (v1.7.1):
helm install eg oci://docker.io/envoyproxy/gateway-helm \
--version v1.7.1 \
-n envoy-gateway-system \
--create-namespace
# Wait for the controller to become ready
kubectl wait --timeout=5m \
-n envoy-gateway-system \
deployment/envoy-gateway \
--for=condition=Available
Verify the GatewayClass is accepted:
kubectl get gatewayclass
Expected output:
NAME CONTROLLER ACCEPTED AGE
envoy-gateway gateway.envoyproxy.io/gatewayclass-controller True 30s
When the ACCEPTED column shows True, the controller has recognized the GatewayClass and is ready to program Gateways.
Step 3: inventory your existing Ingress resources
Before converting anything, get a complete picture of what ingress-nginx currently handles.
# List all Ingress resources across namespaces
kubectl get ingress -A -o wide
# Export annotations (this is what ingress2gateway needs to translate)
kubectl get ingress -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.metadata.annotations}{"\n"}{end}'
Review the annotations list. Flag any that use configuration-snippet, server-snippet, auth-url, limit-rps, or modsecurity-*. These have no standard Gateway API equivalent and require manual work (see common pitfalls below).
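One quick way to flag those is a grep pattern over the exported annotations. The pattern below is mine, not exhaustive; extend it with any other snippet-style annotations your manifests rely on.

```shell
# A regex covering the annotations called out above -- the ones with no
# standard Gateway API equivalent. Pipe the annotation export from the
# previous command through it:
#   kubectl get ingress -A -o jsonpath=... | grep -E "$UNSUPPORTED"
UNSUPPORTED='configuration-snippet|server-snippet|auth-url|limit-(rps|rpm|connections)|modsecurity'
```

Every line this matches represents manual conversion work; every line it misses should be covered by the ingress2gateway translation table in the next step.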
Step 4: convert manifests with ingress2gateway
ingress2gateway v1.0 is the official SIG Network migration tool. It reads your Ingress resources and emits equivalent Gateway API manifests.
# Install via Homebrew
brew install ingress2gateway
# Or via Go
go install github.com/kubernetes-sigs/ingress2gateway@v1.0.0
Generate Gateway API manifests for review:
# All namespaces, from live cluster
ingress2gateway print --providers=ingress-nginx -A > gateway-resources.yaml
# Single namespace
ingress2gateway print --providers=ingress-nginx --namespace production > prod-gateway.yaml
# From local files (no cluster access needed)
ingress2gateway print \
--input-file my-ingress.yaml,other-ingress.yaml \
--providers=ingress-nginx > gateway-resources.yaml
# With Envoy Gateway-specific extensions
ingress2gateway print --providers=ingress-nginx --emitter envoy-gateway > eg-resources.yaml
Read the output carefully. The tool prints WARNING lines for every annotation it cannot translate. Each warning is a manual conversion task.
What ingress2gateway handles automatically
The tool supports 30+ ingress-nginx annotations with verified behavioral equivalence:
| ingress-nginx annotation | Gateway API equivalent |
|---|---|
| ssl-redirect | HTTPRoute RequestRedirect filter (scheme: https) |
| rewrite-target | HTTPRoute URLRewrite filter (ReplacePrefixMatch) |
| proxy-set-headers | HTTPRoute RequestHeaderModifier filter |
| enable-cors + related | HTTPRoute CORS filter (GA in v1.5) |
| canary + canary-weight | HTTPRoute backendRefs with weight |
| canary-header | HTTPRoute header-based matches |
| proxy-read-timeout | HTTPRoute timeouts.backendRequest |
| backend-protocol: "HTTPS" | BackendTLSPolicy |
| backend-protocol: "GRPC" | GRPCRoute |
| ssl-passthrough | TLSRoute with Passthrough listener |
| spec.tls[].secretName | Gateway listener tls.certificateRefs |
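To make one of those rows concrete, here is a hand-written sketch (hypothetical names, not tool output) of how a simple rewrite-target Ingress maps to an HTTPRoute URLRewrite filter:

```yaml
# ingress-nginx version (for comparison):
#   nginx.ingress.kubernetes.io/rewrite-target: /
#   path: /api
#
# Gateway API version: strip the /api prefix before proxying.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-rewrite
  namespace: production
spec:
  parentRefs:
  - name: production-gateway
    namespace: envoy-gateway-system
  hostnames:
  - api.yourcompany.nl
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: api-service
      port: 8080
```

One caveat: Gateway API only supports prefix and full-path replacement, so regex capture-group rewrites (rewrite-target: /$2) have no direct equivalent and need to be redesigned.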
What requires manual conversion
- configuration-snippet / server-snippet: raw NGINX directive injection. No Gateway API equivalent exists. Find a native HTTPRoute filter or redesign at the application layer.
- limit-rps / limit-rpm / limit-connections: rate limiting is not standardized. Use an implementation-specific policy (e.g., Envoy Gateway's BackendTrafficPolicy with rateLimit).
- auth-url: external authentication. Use implementation-specific SecurityPolicy with ExtAuth.
- modsecurity-*: WAF rules. Requires a separate WAF solution.
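As one example of the rate-limiting case: a limit-rps annotation could become an Envoy Gateway BackendTrafficPolicy. This is a sketch; the route and policy names are placeholders, and the schema is Envoy Gateway's v1alpha1 API, so verify field names against the version you install:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: my-app-ratelimit
  namespace: production
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: my-app          # the route this limit applies to
  rateLimit:
    type: Local           # enforced per Envoy instance, no extra infrastructure
    local:
      rules:
      - limit:
          requests: 100
          unit: Second    # roughly equivalent to limit-rps: 100
```

Envoy Gateway also offers type: Global for cluster-wide counting, but that requires deploying its Redis-backed rate limit service, so Local is the simpler first step.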
Step 5: create the Gateway resource
The Gateway declares your listeners. This is the resource cert-manager will annotate for automatic TLS.
# gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: production-gateway
namespace: envoy-gateway-system
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod # triggers automatic TLS
spec:
gatewayClassName: envoy-gateway
listeners:
- name: http
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: All
- name: https
hostname: "*.yourcompany.nl" # match your actual domain
port: 443
protocol: HTTPS
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: wildcard-yourcompany-tls # cert-manager creates this
allowedRoutes:
namespaces:
from: All
Apply it and check the status:
kubectl apply -f gateway.yaml
kubectl describe gateway production-gateway -n envoy-gateway-system
# Look for: Programmed: True, Accepted: True
Retrieve the new external IP:
kubectl get gateway production-gateway -n envoy-gateway-system \
-o jsonpath='{.status.addresses[*].value}{"\n"}'
This IP is your new load balancer endpoint. Do not point DNS at it yet.
Step 6: deploy HTTPRoutes
Apply the HTTPRoutes generated by ingress2gateway (or your hand-crafted equivalents). Here is a typical HTTPRoute:
# httproute-app.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: my-app
namespace: production
spec:
parentRefs:
- name: production-gateway
namespace: envoy-gateway-system
sectionName: https # bind to the HTTPS listener
hostnames:
- app.yourcompany.nl
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: my-app-service
port: 8080
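If you used the canary annotations, the converted shape is a single HTTPRoute rule with weighted backends instead of two Ingress resources. A sketch, with a hypothetical my-app-canary Service receiving 10% of traffic:

```yaml
# Replaces canary: "true" + canary-weight: "10" on a second Ingress.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /
  backendRefs:
  - name: my-app-service
    port: 8080
    weight: 90
  - name: my-app-canary   # hypothetical canary Service
    port: 8080
    weight: 10
```

Weights are relative, not percentages: 90/10 and 9/1 split traffic identically.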
After applying, verify the route is accepted:
kubectl describe httproute my-app -n production
You need both conditions to be True:
- Accepted: the Gateway listener accepted this route
- ResolvedRefs: all backend Services and Secrets exist and are accessible
If Accepted is False with reason NotAllowedByListeners, the hostname does not match any Gateway listener, or the namespace is not permitted. If ResolvedRefs is False with reason RefNotPermitted, you need a ReferenceGrant in the target namespace.
Cross-namespace references need ReferenceGrant
If your HTTPRoute in namespace production references a Service in namespace shared-infra, the reference is silently ignored without a ReferenceGrant in the target namespace:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
name: allow-routes-from-production
namespace: shared-infra
spec:
from:
- group: gateway.networking.k8s.io
kind: HTTPRoute
namespace: production
to:
- group: ""
kind: Service
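For completeness, the HTTPRoute side of that pair sets namespace explicitly on the backendRef (the Service name here is a placeholder):

```yaml
# In the HTTPRoute (namespace production): point at the shared Service.
backendRefs:
- name: shared-api          # placeholder Service living in shared-infra
  namespace: shared-infra   # without the ReferenceGrant, this ref is rejected
  port: 8080
```

Without the ReferenceGrant, this route reports ResolvedRefs: False with reason RefNotPermitted.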
Step 7: configure cert-manager for Gateway API
cert-manager's Gateway API integration works differently from its Ingress integration. With Ingress, cert-manager watched Ingress resources. With Gateway API, it watches Gateway resources because TLS is configured at the listener level.
Enable Gateway API support
If cert-manager was installed without the Gateway API feature flag, upgrade it:
helm upgrade cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--set config.enableGatewayAPI=true \
--set crds.enabled=true
If you installed Gateway API CRDs after cert-manager, restart the deployment so it picks up the new CRD types:
kubectl rollout restart deployment/cert-manager -n cert-manager
Configure the ACME issuer for Gateway API
The ClusterIssuer needs to use the gatewayHTTPRoute solver instead of the ingress-based solver:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: admin@yourcompany.nl
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod-key
solvers:
- http01:
gatewayHTTPRoute:
parentRefs:
- name: production-gateway
namespace: envoy-gateway-system
kind: Gateway
cert-manager creates a temporary HTTPRoute for the ACME HTTP-01 challenge, validates it through the Gateway's HTTP listener, then deletes it. The Gateway must have an HTTP (port 80) listener that allows routes from the cert-manager namespace (or from: All). Note that HTTP-01 cannot validate wildcard names: if a listener hostname is a wildcard like *.yourcompany.nl, the issuer needs a dns01 solver for that domain instead.
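Since the Gateway in step 5 uses a wildcard hostname, its certificate needs a DNS-01 solver alongside the HTTP-01 one. A sketch assuming Cloudflare-hosted DNS; the provider and Secret name are placeholders, and any DNS provider cert-manager supports works the same way:

```yaml
# Appended to spec.acme.solvers in the ClusterIssuer: DNS-01 handles the
# wildcard, while non-wildcard names keep using HTTP-01.
- dns01:
    cloudflare:
      apiTokenSecretRef:
        name: cloudflare-api-token   # placeholder Secret holding a scoped API token
        key: api-token
  selector:
    dnsNames:
    - "*.yourcompany.nl"
```

The selector routes only the wildcard name to this solver; order solvers from most to least specific so the selector-less HTTP-01 solver acts as the fallback.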
Verify certificates are being issued:
kubectl get certificates -n envoy-gateway-system
# READY column should show True
Step 8: test against the Gateway IP
Both controllers are now running. ingress-nginx is still handling production traffic; the new Gateway has its own IP. Test the Gateway without touching DNS.
GATEWAY_IP=$(kubectl get gateway production-gateway -n envoy-gateway-system \
-o jsonpath='{.status.addresses[0].value}')
# Basic routing
curl -v --resolve "app.yourcompany.nl:443:$GATEWAY_IP" \
https://app.yourcompany.nl/
# HTTPS redirect (should return 301)
curl -v -H "Host: app.yourcompany.nl" http://$GATEWAY_IP/ \
-o /dev/null -w "%{http_code}\n"
# TLS certificate check
echo | openssl s_client -servername app.yourcompany.nl \
-connect $GATEWAY_IP:443 2>/dev/null | openssl x509 -noout -subject
Compare responses between the old and new endpoints:
INGRESS_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
diff \
<(curl -s --resolve "app.yourcompany.nl:443:$INGRESS_IP" https://app.yourcompany.nl/) \
<(curl -s --resolve "app.yourcompany.nl:443:$GATEWAY_IP" https://app.yourcompany.nl/)
If diff produces no output, the responses are identical. Test every route, not just the main one.
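To scale that comparison to every hostname, a small helper can check status-code parity across both endpoints. HOSTS is a placeholder list you fill in; INGRESS_IP and GATEWAY_IP come from the earlier commands.

```shell
# Compare HTTP status codes from the old and new endpoints for each host.
check_host() {
  # $1 = hostname, $2 = IP to resolve it to
  curl -s -o /dev/null -w '%{http_code}' --resolve "$1:443:$2" "https://$1/"
}

compare_hosts() {
  for h in $HOSTS; do
    old=$(check_host "$h" "$INGRESS_IP")
    new=$(check_host "$h" "$GATEWAY_IP")
    status=OK; [ "$old" = "$new" ] || status=MISMATCH
    echo "$h old=$old new=$new $status"
  done
}

# Usage:
#   HOSTS="app.yourcompany.nl api.yourcompany.nl"
#   compare_hosts
```

Status-code parity is a cheap first pass; still diff full response bodies for routes with rewrites or header-dependent behavior.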
Step 9: cut over DNS
Once parallel testing is complete, switch DNS to the new Gateway IP.
24 hours before the switch, reduce DNS TTL to 60 seconds:
# Verify the current TTL (second column of the answer section)
dig app.yourcompany.nl A +noall +answer
Update the TTL through your DNS provider's control panel. Wait one full original-TTL cycle for caches to drain.
Execute the cutover: update the A/AAAA record to point at $GATEWAY_IP. With 60-second TTL, most clients will resolve the new IP within minutes.
Monitor during propagation: expect traffic on both endpoints for the duration of the TTL drain. Watch:
- HTTP 4xx/5xx error rate (should stay at baseline)
- TLS handshake errors (certificate mismatch if cert-manager is not ready)
- Latency percentiles (p50, p95, p99)
Rollback
If something goes wrong, update DNS back to the ingress-nginx IP. With 60-second TTL, rollback completes within a minute. The old Ingress resources are still active and ingress-nginx is still running.
Step 10: decommission ingress-nginx
Keep ingress-nginx running for at minimum 24 to 48 hours after the DNS cutover as rollback insurance. After the validation period:
# Delete Ingress resources (label them during migration for easy cleanup)
kubectl delete ingress -A -l migrated-to-gateway=true
# Remove ingress-nginx
helm uninstall ingress-nginx -n ingress-nginx
# Clean up the namespace
kubectl delete namespace ingress-nginx
Common pitfalls
Silent route rejection. The single most confusing failure mode. An HTTPRoute applies successfully (kubectl apply returns no error), but traffic does not flow. Always run kubectl describe httproute <name> and check the status conditions. Common reasons: hostname not matching any listener (NotAllowedByListeners), missing ReferenceGrant for cross-namespace references (RefNotPermitted), or a backend Service that does not exist (BackendNotFound).
cert-manager ignoring Gateway annotations. cert-manager's Gateway API support requires the config.enableGatewayAPI=true flag. Without it, cert-manager silently ignores all Gateway annotations. If you installed Gateway API CRDs after cert-manager, a restart of the cert-manager deployment is also required.
No configuration-snippet replacement. If your ingress-nginx setup relies on configuration-snippet for raw NGINX directive injection, there is no Gateway API equivalent. Each snippet must be analyzed individually. Some can be replaced by native HTTPRoute filters (header modification, redirects, rewrites). Others require implementation-specific policies like Envoy Gateway's EnvoyPatchPolicy. Some require application-level changes.
Rate limiting is not portable. The limit-rps, limit-rpm, and limit-connections annotations have no standard Gateway API equivalent. Rate limiting is always implementation-specific. If you change Gateway API implementations later, you will need to rewrite your rate limiting configuration.
CRD version conflicts. Installing Gateway API v1.5.x CRDs can cause Istio 1.28/1.29 to CrashLoopBackOff due to API field mismatches. If running those Istio versions, use Gateway API v1.4.0 CRDs and upgrade Istio first.
Multi-tenant TLS self-service gap. With ingress-nginx, each team could annotate their own Ingress resource to get TLS certificates from cert-manager. With a shared Gateway, only operators can modify listeners and TLS configuration. ListenerSet (experimental in cert-manager 1.20, stable expected in 1.21/1.22) restores per-team TLS autonomy. Plan for increased operator involvement in the interim.
DNS propagation is not instant. Even with 60-second TTLs, some resolvers cache aggressively. Both ingress-nginx and the Gateway must handle production traffic correctly during the overlap window. Never delete Ingress resources or stop ingress-nginx before DNS propagation completes.
Verify migration is complete
You know the migration succeeded when:
- kubectl get ingress -A returns no resources (all deleted)
- kubectl get httproute -A shows all routes with Accepted: True and ResolvedRefs: True
- kubectl get certificates -n envoy-gateway-system shows all certificates READY: True
- Application monitoring confirms baseline latency and error rates
- ingress-nginx namespace no longer exists
- DNS records point to the Gateway IP with production TTLs restored
When to escalate
If you are stuck after following this guide, collect the following before asking for help:
- Output of kubectl get gateway,httproute,referencegrant -A -o yaml
- Output of kubectl describe gateway <name> -n <namespace>
- Output of kubectl describe httproute <name> -n <namespace> (the status conditions are critical)
- cert-manager logs: kubectl logs -n cert-manager deployment/cert-manager --tail=100
- Gateway controller logs (for Envoy Gateway: kubectl logs -n envoy-gateway-system deployment/envoy-gateway --tail=100)
- The Gateway API CRD version installed: kubectl get crd gateways.gateway.networking.k8s.io -o jsonpath='{.metadata.labels.gateway\.networking\.k8s\.io/bundle-version}'
- Your Kubernetes version: kubectl version