Goal
At the end of this article you will have a set of NetworkPolicy resources that enforce a zero-trust network posture inside a Kubernetes namespace: all traffic denied by default, with explicit allow rules for the flows your application needs.
Prerequisites
- A Kubernetes cluster running v1.25 or later, with `kubectl` configured for the target cluster and permission to create NetworkPolicy objects
- A CNI plugin that supports NetworkPolicy enforcement (see the next section). Without a supporting CNI, NetworkPolicy objects are stored in the API but have zero effect on traffic.
- At least one application deployed behind a Service. NetworkPolicy selects pods by label, but you will use Services for connectivity testing. If you need a refresher on how Services expose pods, review the four Service types first.
CNI requirement
This is the prerequisite most people skip, and it leads to the worst failure mode: policies that look correct but do nothing.
NetworkPolicy objects are part of the Kubernetes API, but the API server only stores them. Enforcement is the CNI plugin's job. If your CNI does not implement NetworkPolicy, creating these objects is a silent no-op. No error, no warning, no event.
CNIs that enforce NetworkPolicy:
| CNI | Notes |
|---|---|
| Calico | Most widely deployed for network policy. Adds GlobalNetworkPolicy CRDs for cluster-wide rules. |
| Cilium | eBPF-native. Also supports CiliumNetworkPolicy for L7 (HTTP path, DNS-based) filtering beyond what the standard API offers. |
| Antrea | Also supports the emerging AdminNetworkPolicy API. |
| Weave Net | Simpler setup, automatic mesh encryption. |
| Canal | Flannel for routing + Calico for policy. Retrofits NetworkPolicy onto Flannel-based clusters. |
CNIs that do NOT enforce NetworkPolicy:
- Flannel: handles inter-node routing only. Flannel's repository explicitly states it does not implement NetworkPolicy.
- Kubenet: Kubernetes' built-in lightweight option. No policy enforcement.
Managed Kubernetes specifics:
- GKE: requires explicitly enabling "Network policy" or using Dataplane V2 (Cilium-based).
- EKS: the AWS VPC CNI added native NetworkPolicy support in late 2023. Calico and Cilium are common overlay alternatives.
- AKS: uses Azure CNI with Calico or Cilium. The policy engine must be selected at cluster creation.
- k3s: ships with Flannel by default (no NetworkPolicy). Switching to Calico or Canal is required for enforcement.
Verify your CNI before writing any policies. I have seen teams spend days debugging "broken" policies on clusters running Flannel, where the policies were syntactically valid but had zero enforcement behind them.
How NetworkPolicy works
A pod becomes isolated for a traffic direction (ingress, egress, or both) the moment at least one NetworkPolicy selects it and includes that direction in policyTypes. Once isolated, all traffic in that direction is denied unless an explicit rule permits it.
Policies are additive. Multiple policies selecting the same pod accumulate: the allowed traffic is the union of all their rules. There is no precedence, no ordering, no conflict resolution. A permissive rule in one policy cannot be overridden by a restrictive rule in another.
A few details that matter in practice:
- NetworkPolicy operates at OSI layer 3/4 (IP address + TCP/UDP/SCTP port). It cannot inspect HTTP headers, paths, or DNS names. For L7 filtering, use Cilium's `CiliumNetworkPolicy` or a service mesh.
- For a connection to succeed, both the egress policy on the source pod and the ingress policy on the destination pod must allow it.
- Pods with `hostNetwork: true` bypass NetworkPolicy entirely. They use the node's IP and are treated as node traffic.
- Existing connections are not torn down when a policy is applied. Only new connections are evaluated.
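To make the additive model concrete, here is a sketch with two policies selecting the same hypothetical `app: api` pods (the names and labels are illustrative). The pods accept traffic from sources allowed by either policy; neither policy can restrict the other:

```yaml
# Policy 1: allow ingress to api pods from frontend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
---
# Policy 2: allow ingress to the same api pods from batch pods.
# Combined result: api pods accept traffic from frontend OR batch pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-batch
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: batch
```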
Step 1: apply a default deny-all policy
Start with a zero-trust baseline. This policy selects every pod in the namespace (empty podSelector: {}) and declares both Ingress and Egress in policyTypes with no allow rules:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # selects all pods in this namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress or egress rules = deny everything
```
Apply it:
```shell
kubectl apply -f default-deny-all.yaml
```
Expected result: every pod in the production namespace is now isolated. No inbound connections, no outbound connections.
This breaks DNS. Pods cannot resolve service names because DNS queries (UDP/TCP port 53 to kube-dns in kube-system) are blocked by the egress deny. You fix this in the next step.
Because NetworkPolicy is namespace-scoped, this policy only affects the production namespace. Repeat it in every namespace you want to isolate.
Step 2: allow DNS egress
Without DNS, nothing works. Apply this policy in the same namespace to allow all pods to query kube-dns:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```
The namespaceSelector and podSelector sit inside the same to item. That is AND logic: only pods labeled k8s-app: kube-dns in the kube-system namespace are targeted. If they were separate list items, it would be OR logic and match far more than intended. This AND vs OR distinction is the single most common NetworkPolicy misconfiguration.
The kubernetes.io/metadata.name label is automatically set on every namespace since Kubernetes 1.21. It is immutable and is the canonical way to target a namespace by name.
```shell
kubectl apply -f allow-dns-egress.yaml
```
Verify DNS works:
```shell
kubectl run dns-test --image=nicolaka/netshoot --namespace=production \
  --rm -it --restart=Never -- nslookup kubernetes.default
# Expected: Name: kubernetes.default, Address: 10.96.0.1 (your cluster IP)
```
Step 3: allow specific ingress traffic
With the deny-all baseline in place, layer on explicit allow rules for the traffic your application needs.
Allow ingress from specific pods by label
This policy lets only pods labeled app: frontend in the same namespace reach app: backend pods on port 8080:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
When from contains only a podSelector with no namespaceSelector, it matches pods in the same namespace as the policy.
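For contrast, a sketch of the opposite intent: to accept traffic from `app: frontend` pods in any namespace, pair the `podSelector` with an empty `namespaceSelector` inside the same `from` item (the labels and port here mirror the example above and are illustrative):

```yaml
ingress:
- from:
  - namespaceSelector: {}   # any namespace...
    podSelector:
      matchLabels:
        app: frontend       # ...but only pods with this label
  ports:
  - protocol: TCP
    port: 8080
```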
Allow ingress from a specific namespace
Let monitoring tools in the monitoring namespace scrape metrics:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
    ports:
    - protocol: TCP
      port: 9090
```
Allow ingress from a specific pod in a specific namespace (AND logic)
To allow only Prometheus pods in the monitoring namespace (not all pods there):
```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring
    podSelector:
      matchLabels:
        app: prometheus
  ports:
  - protocol: TCP
    port: 9090
```
Both selectors are in the same from item (same indentation level). Both must match. Compare this to the OR pattern where they are separate list items:
```yaml
# OR logic: allows ANY pod in monitoring OR any prometheus pod in ANY namespace
from:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: monitoring
- podSelector:
    matchLabels:
      app: prometheus
```
The difference is one YAML indentation level. Get it wrong and you open traffic to far more sources than intended.
Step 4: configure egress rules
Egress policies control what outbound traffic pods are allowed to send.
Allow egress to a specific internal service
This policy lets frontend pods talk to backend pods on port 8080 and nothing else (besides DNS, covered by the policy from Step 2):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 8080
```
Allow egress to an external API endpoint
When a pod needs to reach an external service (a payment gateway, an API provider), use ipBlock:
```yaml
egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24   # payment gateway CIDR
  ports:
  - protocol: TCP
    port: 443
```
Named ports: NetworkPolicy ports entries can reference a named port from the container spec (e.g., port: http matching containerPort: 80, name: http). This makes policies resilient to port number changes.
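As a sketch, a container exposing a named port and a policy rule referencing it might look like this (the `metrics` name, labels, and port number are illustrative):

```yaml
# Container spec fragment: the port is named "metrics"
ports:
- name: metrics
  containerPort: 9090
---
# Policy rule fragment: reference the name instead of the number
ingress:
- from:
  - podSelector:
      matchLabels:
        app: prometheus
  ports:
  - protocol: TCP
    port: metrics   # resolves to containerPort 9090 on each selected pod
```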
Port ranges are supported since Kubernetes 1.25 using the endPort field:
```yaml
ports:
- protocol: TCP
  port: 32000
  endPort: 32768
```
Step 5: isolate namespaces
The most common multi-tenant pattern: pods within a namespace can communicate freely, but no cross-namespace traffic is allowed.
Ingress isolation
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # any pod in the SAME namespace
```
A podSelector: {} without an accompanying namespaceSelector matches only pods in the same namespace as the policy.
Egress isolation (with DNS)
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation-egress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}   # intra-namespace only
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```
Allow a shared service to receive traffic from all namespaces
For centralized logging, monitoring, or an ingress controller:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-all-namespaces
  namespace: shared-services
spec:
  podSelector:
    matchLabels:
      app: log-aggregator
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}   # all namespaces
    ports:
    - protocol: TCP
      port: 5044
```
Namespace isolation via NetworkPolicy is one layer of a multi-tenant architecture. For full isolation, combine it with RBAC to prevent namespace privilege escalation and PodSecurity admission to block hostNetwork: true (which bypasses NetworkPolicy entirely).
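As one sketch of that second layer, the built-in PodSecurity admission controller is enabled per namespace via labels; its `baseline` level rejects pods that request `hostNetwork: true` (the namespace name here is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # illustrative namespace name
  labels:
    # Reject pods that violate the baseline policy, which forbids
    # hostNetwork, hostPID, hostIPC, and privileged containers.
    pod-security.kubernetes.io/enforce: baseline
    # Warn (without blocking) on anything that would fail "restricted".
    pod-security.kubernetes.io/warn: restricted
```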
Step 6: test your policies
Policies that look correct on paper can silently fail. Test both directions: confirm allowed traffic succeeds and blocked traffic is rejected.
Verify labels and policies
```shell
kubectl get pods -n production --show-labels
kubectl get namespace --show-labels
kubectl get networkpolicies -n production
kubectl describe networkpolicy allow-frontend-to-backend -n production
```
Test connectivity with a debug pod
The nicolaka/netshoot image provides curl, nc, nslookup, dig, and other network tools:
# Spawn a pod with the frontend label
kubectl run test-frontend \
--image=nicolaka/netshoot \
--namespace=production \
--labels="app=frontend" \
--rm -it --restart=Never -- /bin/bash
Inside the pod:
```shell
# This should SUCCEED (frontend → backend on 8080 is allowed)
curl -v --connect-timeout 5 http://backend-service:8080/health

# This should TIME OUT (frontend → database on 5432 is not allowed)
curl -v --max-time 5 telnet://postgres-service:5432
# Expected: curl: (28) Connection timed out after 5001 milliseconds
```
Blocked connections time out rather than returning a TCP RST. NetworkPolicy silently drops packets at the network layer.
CNI-specific observability
Cilium with Hubble:
```shell
# Watch dropped flows in real time
hubble observe --namespace production --verdict DROPPED

# Trace a specific connection
cilium policy trace \
  --src-k8s-pod production:test-frontend \
  --dst-k8s-pod production:backend-pod \
  --dport 8080 --protocol TCP
```
Calico:
```shell
calicoctl get networkpolicy --all-namespaces
kubectl logs -n calico-system -l k8s-app=calico-node | grep -i denied
```
Cilium also supports an audit mode that logs policy violations instead of dropping traffic. This is the safest way to roll out policies on existing production workloads: apply policies in audit mode, watch Hubble for AUDIT verdicts, fix rules, then switch to enforcement.
Complete example: three-tier application
Bringing everything together for a namespace with frontend, backend, and database tiers:
```yaml
# 1. Default deny-all
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes: [Ingress, Egress]
---
# 2. Allow DNS for all pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
---
# 3. Allow ingress-nginx to reach frontend on port 80/443
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes: [Ingress]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
---
# 4. Frontend → backend on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 8080
---
# 5. Backend → database on port 5432
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
---
# 6. Frontend egress to backend only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes: [Egress]
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 8080
---
# 7. Backend egress to database only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes: [Egress]
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432
```
Verify the full chain:
```shell
kubectl get networkpolicies -n production
# Expected: 7 policies listed

kubectl describe networkpolicy default-deny-all -n production
# PodSelector: <none> (matches all)
# Allowing ingress traffic: <none>
# Allowing egress traffic: <none>
```
Common mistakes
- Forgetting DNS when denying egress. Every application that resolves service names breaks silently. Always deploy the DNS allow rule alongside or before the deny-all policy.
- AND vs OR selector confusion. A single `from` item with both `namespaceSelector` and `podSelector` = AND. Separate list items = OR. The difference is one indentation level in YAML.
- Deploying policies on a CNI that does not support them. Flannel and kubenet accept the objects without error but enforce nothing. Verify your CNI first.
- Missing namespace labels. Cross-namespace policies using `namespaceSelector` match nothing if the target namespace is not labeled. Verify with `kubectl get namespace --show-labels`.
- Skipping `policyTypes`. If `policyTypes` is omitted, Kubernetes infers it: Ingress is always assumed, Egress only if explicit egress rules exist. A deny-egress policy with no egress rules and no `policyTypes: [Egress]` does nothing for egress.
- Ignoring `hostNetwork` pods. Pods with `hostNetwork: true` bypass NetworkPolicy. If a compromised workload gains this capability, it communicates freely.
- Not testing blocked paths. Confirming that allowed traffic works is not enough. Explicitly verify that traffic outside your allow rules times out.
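To illustrate the `policyTypes` inference pitfall, a sketch: both manifests below are accepted by the API, but only the second one actually isolates egress:

```yaml
# Does NOT deny egress: with policyTypes omitted and no egress rules,
# Kubernetes infers policyTypes: [Ingress]. This policy denies all
# ingress to the selected pods but leaves egress untouched.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-broken
spec:
  podSelector: {}
---
# Denies all egress: Egress is declared explicitly and no egress
# rules allow anything.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-working
spec:
  podSelector: {}
  policyTypes: [Egress]
```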
Known limitations
The standard NetworkPolicy API as of Kubernetes v1.35 cannot:
- Apply cluster-scoped policies (the emerging AdminNetworkPolicy API, currently v1alpha1/v1beta1, addresses this)
- Express negative/deny rules (except `ipBlock.except` CIDR exclusions)
- Filter at Layer 7 (HTTP method, path, headers)
- Filter egress by DNS name
- Target NodePorts on nodes
- Affect traffic to/from `hostNetwork: true` pods
- Terminate existing connections when policies change