Kubernetes Network Policies: Locking Down Pod-to-Pod Communication
Implement Kubernetes network policies from zero-trust to production. Covers default-deny, namespace isolation, egress control, debugging tools, and the common misconfigurations that leave clusters wide open.
By default, every pod in a Kubernetes cluster can talk to every other pod. Every pod can reach the internet. Every pod can query the Kubernetes API. This is fine for development. For production, it is a disaster waiting to happen — because it means a compromised pod in your logging namespace can reach your payment service, your database, and your credentials store.
Network policies are Kubernetes-native firewall rules that control traffic flow between pods. They are one of the most important security controls in a Kubernetes cluster, and one of the least implemented — because the syntax is confusing, the behavior is counterintuitive, and mistakes are silent.
This guide shows you how to implement network policies correctly, starting from zero — the default-deny approach that forces you to explicitly allow every connection.
The Default Problem: Everything Can Talk to Everything
Without network policies:
```
┌─────────┐     ┌─────────┐     ┌─────────┐
│ Frontend│────▶│   API   │────▶│ Database│
│         │     │  Server │     │         │
└─────────┘     └─────────┘     └─────────┘
     │               │               │
     ├───────────────┼───────────────┤
     │  Everything talks to everything, including:
     │   - Frontend → Database (bypassing the API!)
     │   - Any pod → Internet (data exfiltration)
     │   - Any pod → K8s API (lateral movement)
     └─────────────────────────────────────────
```

With network policies:

```
┌─────────┐     ┌─────────┐     ┌─────────┐
│ Frontend│────▶│   API   │────▶│ Database│
│         │ ✅  │  Server │ ✅  │         │
└─────────┘     └─────────┘     └─────────┘

  ❌ Frontend → Database (blocked)
  ❌ Any pod → Internet (blocked unless explicitly allowed)
  ❌ Any pod → K8s API (blocked)
```
Step 1: Default Deny All Traffic
The single most important network policy is the one that blocks everything by default. Network policies are purely additive allow rules: a pod is only isolated once at least one policy selects it. Without a blanket default-deny, any pod that no policy happens to select remains completely open, so your security posture depends on never forgetting a workload.
```yaml
# default-deny-all.yaml
# Apply to EVERY namespace that hosts workloads.
# No namespace in metadata: it is set at apply time with -n,
# so the same manifest works for every namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}       # applies to ALL pods in the namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress or egress rules = block everything
```

```bash
# Apply to all workload namespaces
for ns in production staging monitoring; do
  kubectl apply -f default-deny-all.yaml -n "$ns"
done
```
After applying default-deny, everything breaks. This is correct behavior. You now whitelist connections one by one. This is the zero-trust approach.
Step 2: Allow Only Required Connections
Frontend → API Server (Ingress)
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
API Server → Database (Egress + Ingress)
```yaml
# Allow the API server to reach the database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-to-database-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
---
# Allow the database to receive connections from the API server
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-ingress-from-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api-server
    ports:
    - protocol: TCP
      port: 5432
```
Allow DNS (Required for Name Resolution)
```yaml
# Without this, pods cannot resolve service names
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}       # all pods need DNS
  policyTypes:
  - Egress
  egress:
  - to: []              # empty 'to' matches ANY destination
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```
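The rule above allows port 53 to any destination. A tighter variant is sketched below, assuming your cluster DNS (CoreDNS or kube-dns) runs in `kube-system` with the standard `k8s-app: kube-dns` label, and that you are on Kubernetes 1.21+ where the `kubernetes.io/metadata.name` namespace label is set automatically:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-to-kube-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns   # standard label on CoreDNS/kube-dns pods
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

This prevents a compromised pod from tunneling data out over port 53 to an attacker-controlled resolver, at the cost of breaking direct lookups against external DNS servers.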
Cross-Namespace Isolation
Isolate namespaces from each other — monitoring cannot reach production, staging cannot reach production.
```yaml
# Allow monitoring to scrape production metrics (specific port only)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: production
spec:
  podSelector: {}       # all pods in production
  policyTypes:
  - Ingress
  ingress:
  - from:
    # namespaceSelector AND podSelector in ONE list entry:
    # only prometheus pods in monitoring-labeled namespaces.
    # As two separate entries they would be ORed, which is far broader.
    - namespaceSelector:
        matchLabels:
          purpose: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    ports:
    - protocol: TCP
      port: 9090        # metrics port
```
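A related isolation pattern, sketched here for a hypothetical `staging` namespace under the same default-deny setup: allow ingress only from pods in the same namespace, so intra-namespace traffic works while cross-namespace traffic stays blocked.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # a bare podSelector never crosses namespace boundaries
```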
Common Misconfigurations
| Misconfiguration | What Happens | Fix |
|---|---|---|
| Default-deny on ingress only | Egress is still wide open; data exfiltration remains possible | Include both `Ingress` and `Egress` in `policyTypes` |
| Forgetting DNS egress | All service discovery breaks; pods cannot resolve names | Always allow UDP/TCP port 53 egress |
| Wrong label selectors | Policy matches the wrong pods, or no pods at all | Verify with `kubectl get pods -l app=api-server` |
| Unlabeled namespaces | `namespaceSelector` matches nothing | Label namespaces: `kubectl label ns monitoring purpose=monitoring` |
| CNI does not support policies | Policies are silently ignored | Verify your CNI (Calico and Cilium enforce policies; Flannel does not) |
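For the namespace-labeling row above, labels can also be managed declaratively instead of with `kubectl label` — a sketch for the `monitoring` namespace assumed in the Prometheus example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    purpose: monitoring   # matched by the namespaceSelector in allow-prometheus-scrape
```

Keeping the label in the namespace manifest means it survives cluster rebuilds and cannot silently drift out of sync with the policies that depend on it.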
Debugging Network Policies
```bash
# 1. List the network policies in a namespace
kubectl get networkpolicy -n production

# 2. Check whether one pod can reach another
kubectl exec -n production deploy/frontend -- \
  wget -qO- --timeout=3 http://api-server:8080/health

# 3. Verify pod labels match policy selectors
kubectl get pods -n production --show-labels

# 4. Check which CNI is running (Flannel does not enforce policies)
kubectl get pods -n kube-system | grep -E 'calico|cilium|weave'

# 5. Use the Cilium network policy editor for visualization:
#    https://editor.networkpolicy.io/
```
Implementation Checklist
- Verify your CNI supports network policies (Calico or Cilium, NOT Flannel)
- Apply default-deny to ALL workload namespaces
- Allow DNS egress (port 53) in every namespace — without this, nothing works
- Map every legitimate connection path: service A → service B on port X
- Create explicit allow policies for each connection path
- Test every policy by running connectivity checks between pods
- Label namespaces for cross-namespace policies
- Block egress to the internet except for pods that genuinely need it
- Add network policy review to your deployment checklist for new services
- Monitor for blocked connections in your CNI logs (Calico/Cilium log denied traffic)
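The internet-egress item in the checklist can be expressed with `ipBlock` — a sketch that allows HTTPS to anywhere except the RFC 1918 private ranges, so "internet" does not implicitly include other cluster or VPC workloads. The `needs-internet` label is a hypothetical convention; adjust the label and the `except` list for your network.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-https-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      needs-internet: "true"   # label only the pods that genuinely need egress
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
```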