Network Observability with eBPF
Monitor network traffic at the kernel level using eBPF for deep observability without agent overhead. Covers eBPF fundamentals, Cilium Hubble, DNS monitoring, latency tracking, traffic flow visualization, and the patterns that give complete network visibility.
Traditional network monitoring relies on agents, log parsing, and packet captures that add overhead and miss context. eBPF (extended Berkeley Packet Filter) runs sandboxed programs directly in the Linux kernel, observing every packet, system call, and network event with minimal overhead. It gives you X-ray vision into your network.
eBPF Fundamentals
Traditional monitoring:
Application → Agent → Collector → Dashboard
Problem: Agent overhead, application changes needed, blind spots
eBPF monitoring:
Kernel Events → eBPF Program → User-space Collector → Dashboard
Advantage: No agent, no application changes, kernel-level visibility
How it works:
1. Write eBPF program (C-like, limited instruction set)
2. Verifier checks for safety (no unbounded loops, bounded memory access)
3. JIT compiler compiles to native code
4. Attach to kernel hook (network, syscall, tracepoint)
5. Program runs on every event, writes to maps (shared memory)
6. User-space program reads maps for analysis
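The whole pipeline fits in a few lines with bcc. The sketch below is a hypothetical example, separate from the Cilium setup that follows: it assumes bcc is installed, root privileges, and a kernel that exposes the syscalls:sys_enter_connect tracepoint. The eBPF program counts connect() calls per PID in a hash map; user space reads the map.
from time import sleep
from bcc import BPF

prog = r"""
BPF_HASH(connects, u32, u64);  // step 5-6: map shared with user space

// Runs on every connect() syscall entry
TRACEPOINT_PROBE(syscalls, sys_enter_connect) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    connects.increment(pid);
    return 0;
}
"""

b = BPF(text=prog)  # steps 2-4: verify, JIT-compile, attach (tracepoints auto-attach)
sleep(5)            # let the in-kernel program accumulate counts
for pid, count in b["connects"].items():  # step 6: read the map from user space
    print(f"pid {pid.value}: {count.value} connect() calls")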
Cilium Hubble for Kubernetes
# Enable Hubble for network observability
apiVersion: cilium.io/v1alpha1
kind: CiliumConfig
metadata:
  name: cilium
spec:
  hubble:
    enabled: true
    relay:
      enabled: true
    ui:
      enabled: true
    metrics:
      enabled:
        - dns
        - drop
        - tcp
        - flow
        - icmp
        - http
# Observe flows in real-time
hubble observe --namespace order-service
# Filter by verdict (dropped packets)
hubble observe --verdict DROPPED
# Filter by HTTP status
hubble observe --http-status 500
# DNS visibility
hubble observe --protocol DNS
# Export flows for analysis
hubble observe --output json > flows.json
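The JSON export lends itself to quick offline analysis. A minimal sketch, assuming each line of flows.json is one JSON object; some Hubble versions wrap the flow in a top-level flow field, which the code below handles, but adjust to your version's schema:
import json
from collections import Counter

verdicts = Counter()
drop_sources = Counter()

with open("flows.json") as f:
    for line in f:
        obj = json.loads(line)
        flow = obj.get("flow", obj)  # unwrap if this Hubble version nests the flow
        verdicts[flow.get("verdict", "UNKNOWN")] += 1
        if flow.get("verdict") == "DROPPED":
            src = flow.get("source", {}).get("pod_name", "unknown")
            drop_sources[src] += 1

print("Flows by verdict:", dict(verdicts))
print("Top drop sources:", drop_sources.most_common(5))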
DNS Monitoring
# eBPF DNS monitoring with bcc: kprobe on udp_sendmsg, filtered to UDP port 53
from bcc import BPF
from socket import inet_ntop, AF_INET
from struct import pack

bpf_program = """
#include <net/sock.h>
#include <bcc/proto.h>

struct dns_event {
    u32 pid;
    u32 saddr;
    u32 daddr;
    char comm[16];
    u16 qtype;  // stays 0 in this sketch; filling it requires parsing the DNS payload
};
BPF_PERF_OUTPUT(dns_events);

int trace_dns(struct pt_regs *ctx, struct sock *sk) {
    u16 dport = sk->__sk_common.skc_dport;  // network byte order
    if (dport != htons(53))
        return 0;  // not DNS
    struct dns_event event = {};
    event.pid = bpf_get_current_pid_tgid() >> 32;
    event.saddr = sk->__sk_common.skc_rcv_saddr;
    event.daddr = sk->__sk_common.skc_daddr;
    bpf_get_current_comm(&event.comm, sizeof(event.comm));
    dns_events.perf_submit(ctx, &event, sizeof(event));
    return 0;
}
"""

b = BPF(text=bpf_program)
b.attach_kprobe(event="udp_sendmsg", fn_name="trace_dns")

def print_event(cpu, data, size):
    e = b["dns_events"].event(data)
    print(f"{e.comm.decode():16} pid={e.pid} "
          f"{inet_ntop(AF_INET, pack('I', e.saddr))} -> "
          f"{inet_ntop(AF_INET, pack('I', e.daddr))}")

b["dns_events"].open_perf_buffer(print_event)
while True:
    b.perf_buffer_poll()
# Detect:
# - Unexpected DNS queries (data exfiltration)
# - DNS resolution latency
# - DNS cache hit rates
# - Queries to blacklisted domains
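For the exfiltration case, the user-space side can check each event's destination against the resolvers you actually run. A sketch building on the tracer above; ALLOWED_RESOLVERS is a hypothetical allowlist for your environment, and you would pass check_resolver to open_perf_buffer in place of print_event:
ALLOWED_RESOLVERS = {"10.96.0.10", "10.0.0.53"}  # hypothetical in-cluster resolvers

def check_resolver(cpu, data, size):
    e = b["dns_events"].event(data)
    dst = inet_ntop(AF_INET, pack("I", e.daddr))
    if dst not in ALLOWED_RESOLVERS:
        # DNS to an unexpected resolver is a classic tunneling/exfiltration signal
        print(f"ALERT: {e.comm.decode()} (pid {e.pid}) queried {dst}")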
TCP Connection Tracking
eBPF can track per connection (a retransmission-counting sketch follows this list):
- Source IP → Destination IP
- Source port → Destination port
- Bytes sent / received
- Retransmissions
- RTT (round-trip time)
- Connection duration
- TCP state transitions
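A minimal retransmission counter with bcc, assuming a kernel that exposes the tcp:tcp_retransmit_skb tracepoint (its field layout varies across kernel versions; dport is in host byte order here):
from time import sleep
from bcc import BPF

prog = r"""
// Count TCP retransmissions, keyed by destination port
BPF_HASH(retrans, u16, u64);

TRACEPOINT_PROBE(tcp, tcp_retransmit_skb) {
    u16 dport = args->dport;
    retrans.increment(dport);
    return 0;
}
"""

b = BPF(text=prog)
print("Tracing TCP retransmits... Ctrl-C to stop")
try:
    while True:
        sleep(10)
        for port, count in sorted(b["retrans"].items(),
                                  key=lambda kv: -kv[1].value):
            print(f"dport {port.value}: {count.value} retransmits")
except KeyboardInterrupt:
    pass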
Use cases:
- Service dependency mapping (who talks to whom)
- Latency attribution (network vs application)
- Connection failure analysis
- Bandwidth usage per service
Network Policy Verification
# Cilium NetworkPolicy with observability
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: order-service-policy
spec:
  endpointSelector:
    matchLabels:
      app: order-service
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: api-gateway
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
  egress:
    - toEndpoints:
        - matchLabels:
            app: payment-service
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
# Hubble shows dropped traffic that violates policy:
# hubble observe --verdict DROPPED --to-label app=order-service
# Lists every blocked connection attempt with source and drop reason
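To turn that stream into a quick verification report, aggregate dropped flows by source and destination pair. A sketch assuming the hubble CLI is on PATH and supports --last and -o json, with the same schema hedging as before:
import json
import subprocess
from collections import Counter

# Pull the most recent dropped flows from Hubble's ring buffer
out = subprocess.run(
    ["hubble", "observe", "--verdict", "DROPPED", "--last", "1000", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout

pairs = Counter()
for line in out.splitlines():
    obj = json.loads(line)
    flow = obj.get("flow", obj)  # unwrap if this Hubble version nests the flow
    src = flow.get("source", {}).get("pod_name", "?")
    dst = flow.get("destination", {}).get("pod_name", "?")
    pairs[(src, dst)] += 1

for (src, dst), n in pairs.most_common():
    print(f"{src} -> {dst}: {n} drops")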
Anti-Patterns
| Anti-Pattern | Consequence | Fix |
|---|---|---|
| No network visibility | Blind to lateral movement, exfiltration | eBPF-based observability |
| Agent-based monitoring only | Overhead, blind spots in kernel | eBPF for kernel-level visibility |
| Allow-all network policies | No segmentation | Default-deny with Hubble for visibility |
| No DNS monitoring | DNS tunneling, data exfiltration invisible | eBPF DNS tracing |
| Monitoring only north-south traffic | East-west attacks undetected | Service mesh or eBPF for all traffic |
eBPF is the future of network observability. It provides kernel-level visibility without kernel modules, agents, or application changes, making it the most efficient way to understand what is happening on your network.