Kubernetes · 9 min read

Zero-Trust Kubernetes: Network Policy From First Principles

Default Kubernetes networking trusts everything. Here's how to implement zero-trust with network policies, service mesh, and proper segmentation.

THNKBIG Team

Engineering Insights

Traditional network security assumes threats come from outside. Zero-trust eliminates that assumption entirely — every connection must be verified, regardless of where it originates. This guide shows how to implement zero-trust network security in Kubernetes from first principles.

Why Kubernetes Needs Zero-Trust Networking

By default, every pod in a Kubernetes cluster can reach every other pod. There are no firewall rules between namespaces. A compromised pod has unrestricted lateral movement to databases, internal APIs, and other services. In a cluster running dozens of applications, this flat network topology creates enormous blast radius.

Zero-trust network policy inverts this: deny everything by default, then explicitly allow only the traffic that serves a documented, legitimate business purpose.

Layer 1: Kubernetes NetworkPolicy

Kubernetes NetworkPolicy is the built-in tool for pod-level traffic control. It works at L3/L4 (IP and port) and is enforced by your CNI plugin. Calico and Cilium both support NetworkPolicy; Flannel and some older CNIs do not, so verify before relying on it.

The correct starting posture is a default-deny policy in every namespace, covering both ingress and egress — a policy that lists only Ingress in its policyTypes leaves egress wide open, a gap many teams miss. From that baseline, add explicit allow rules for each required communication path:

  • Allow ingress to web pods only from the ingress controller namespace
  • Allow database pods to accept connections only from specific application pods
  • Allow egress to DNS (port 53, kube-dns) for any pod that needs name resolution
  • Allow egress to external APIs only for specific pods with a documented need
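A sketch of this starting posture, assuming a namespace named production and the standard kube-dns labels (label values vary by distribution, so verify yours):

```yaml
# Default-deny: selects every pod in the namespace and, by listing both
# policyTypes with no allow rules, blocks all ingress and egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow rule layered on top: every pod may reach kube-dns on port 53,
# so name resolution keeps working under default-deny.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

Because allow rules are additive, each further path (ingress controller to web pods, application pods to the database) becomes one more small, reviewable policy rather than an edit to a shared rule set.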

Layer 2: Cilium Network Policy (L7)

Standard Kubernetes NetworkPolicy operates at L3/L4 — it can block connections by IP and port, but it can't inspect HTTP methods, paths, or DNS names. Cilium extends this to L7 with CiliumNetworkPolicy resources. This lets you write rules like: allow GET requests to /api/health from the monitoring namespace, but block POST requests to any path from that same namespace.
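That health-check rule can be sketched as a CiliumNetworkPolicy; the app: api label, port 8080, and the monitoring namespace are illustrative assumptions:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-health-checks
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api                 # hypothetical label on the target pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: monitoring
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              # Only GET /api/health is allowed; any other method or
              # path from this namespace is rejected at L7.
              - method: "GET"
                path: "/api/health"
```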

Cilium also supports DNS-name-based egress policy, which is essential for clusters that need to reach external services by hostname rather than IP. Without this, managing egress by CIDR becomes brittle as cloud provider IPs rotate.
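A sketch of DNS-name-based egress, assuming a hypothetical checkout workload that must reach an external payments API. Note that the policy must also allow and inspect DNS traffic so Cilium can learn the name-to-IP mappings it enforces:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-payments-api
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: checkout            # hypothetical workload label
  egress:
    # DNS lookups go through Cilium's DNS proxy so FQDN rules can resolve.
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # External egress allowed only to this hostname, on HTTPS.
    - toFQDNs:
        - matchName: "api.payments.example.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```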

Layer 3: Mutual TLS with a Service Mesh

NetworkPolicy enforces which pods can communicate. Mutual TLS (mTLS) verifies the identity of both endpoints in any communication. Combined, they implement zero-trust: only authorized pods can connect, and each side cryptographically proves its identity.

Istio and Linkerd both provide automatic mTLS between all enrolled pods with no application code changes. In Istio, a PeerAuthentication resource set to STRICT mode rejects any plaintext connection in the mesh, giving you encryption in transit and mutual authentication entirely at the platform layer.
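A minimal sketch of mesh-wide strict mTLS in Istio; istio-system as the root namespace and the v1beta1 API version are common defaults, so adjust for your install:

```yaml
# Applying this in the mesh's root namespace makes STRICT the default for
# every sidecar-enrolled workload; plaintext connections are rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

Narrower PeerAuthentication resources in individual namespaces can override this during migration, which allows a gradual rollout from PERMISSIVE to STRICT.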

Layer 4: RBAC and Workload Identity

Zero-trust applies to the Kubernetes control plane too — not just pod-to-pod network traffic. Every workload that calls the Kubernetes API must use a dedicated service account with minimal permissions. Never use the default service account. Never grant cluster-admin to application workloads.

  • One service account per workload — no sharing between applications or environments
  • Namespace-scoped roles only — avoid ClusterRoles unless cluster-wide access is genuinely required
  • Token volume projection — use projected service account tokens with bounded audience and expiry, not long-lived static tokens
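The pattern above can be sketched as follows; the order-service workload, its Role, and the token audience are all hypothetical names chosen for illustration:

```yaml
# One dedicated service account per workload.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: order-service
  namespace: production
automountServiceAccountToken: false   # opt out of the legacy auto-mounted token
---
# Namespace-scoped Role with only the permissions this workload needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: order-service-config-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: order-service-config-reader
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: order-service-config-reader
subjects:
  - kind: ServiceAccount
    name: order-service
    namespace: production
---
# Pod using a short-lived, audience-bound projected token instead of a
# long-lived static one.
apiVersion: v1
kind: Pod
metadata:
  name: order-service
  namespace: production
spec:
  serviceAccountName: order-service
  containers:
    - name: app
      image: registry.example.com/order-service:1.4.2   # illustrative
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              audience: order-api      # bound audience
              expirationSeconds: 3600  # short-lived, rotated by the kubelet
              path: token
```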

Layer 5: Admission Control and Policy Enforcement

Zero-trust security breaks down if any pod can bypass your controls by requesting privileged mode or host network access. Admission controllers prevent misconfigured workloads from entering the cluster. OPA Gatekeeper and Kyverno both work as validating admission webhooks that can reject pods violating your security policies before they're scheduled.

Minimum admission policies for zero-trust clusters:

  • Block privileged containers
  • Block hostPath mounts
  • Require read-only root filesystems
  • Require non-root user IDs
  • Require resource limits on all containers
  • Forbid the latest image tag (enforce digest pinning)
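As one sketch of such a rule in Kyverno, modeled on the disallow-privileged-containers policy from the Kyverno policy library:

```yaml
# Reject any Pod that requests privileged mode, in regular, init, or
# ephemeral containers. Enforce mode blocks admission rather than auditing.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            # =(field) means: if the field is present, it must match.
            =(ephemeralContainers):
              - =(securityContext):
                  =(privileged): "false"
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
```

The remaining policies (hostPath, root filesystems, non-root users, resource limits, digest pinning) follow the same validate-pattern shape, one small policy per rule.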

Implementing Zero-Trust Kubernetes with THNKBIG

Zero-trust Kubernetes implementation requires expertise across networking, service mesh, admission control, and identity systems. THNKBIG has designed and implemented zero-trust architectures for government contractors, healthcare organizations, and financial services firms that require documented compliance postures. We bring pre-built policy libraries, architecture patterns, and hands-on implementation experience to accelerate your zero-trust journey. Contact us to get started.


THNKBIG Team

Engineering Insights

Expert infrastructure engineers at THNKBIG, specializing in Kubernetes, cloud platforms, and AI/ML operations.
