WarnHack
Cloud & Infrastructure

Hardening Kubernetes Clusters: Defending Against Docker-Based Infostealer Worms

8 min read

The Reality of Container Escapes in Production

During a recent red-team engagement for a Mumbai-based fintech, I identified a persistent infostealer worm that had bypassed perimeter defenses by exploiting a mounted Docker socket in a CI/CD runner pod. While hardening the client's CI/CD pipeline against such threats, we observed the worm exploiting CVE-2024-21626, a critical runc vulnerability, to escape the container and gain root access to the underlying worker node. This allowed the attacker to exfiltrate AWS IAM metadata tokens and Indian banking API keys stored in environment variables.

The threat landscape for Kubernetes (K8s) has shifted from simple crypto-mining to sophisticated data exfiltration. Infostealers now specifically target the internal K8s control plane to escalate privileges. If your cluster relies on default configurations, you are likely exposing sensitive internal service tokens to any pod that can achieve a container escape.

Identifying the Docker Socket Vulnerability

The most common vector for these worms is the exposure of /var/run/docker.sock. I frequently see developers mounting this socket to run "Docker-in-Docker" (DinD) for build pipelines. This is a catastrophic security failure. Anyone with access to that socket can issue commands to the host's Docker daemon, effectively gaining root access to the node.

We use the following command to audit clusters for this specific misconfiguration:



$ kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].volumeMounts[?(@.name=="docker-sock")].mountPath}'

If this returns any paths, those pods are immediate targets for infostealer worms. Once a worm hits a pod with a mounted socket, it spawns a new container with the --privileged flag, mounts the host's root filesystem, and installs a persistence layer.
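For context, the anti-pattern the audit is hunting for typically looks like the following pod spec (a hypothetical build-runner pod; the names and image are illustrative):

```yaml
# DO NOT deploy: illustrates the docker.sock anti-pattern
apiVersion: v1
kind: Pod
metadata:
  name: ci-runner            # hypothetical name
  namespace: ci
spec:
  containers:
    - name: builder
      image: docker:24-cli   # illustrative image
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock   # grants control of the host daemon
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
```

Any process inside this pod can issue `docker run --privileged` against the node's daemon, which is exactly the escalation path the worm uses.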


What is Hardening in the Kubernetes Context?

Hardening is the process of eliminating every unnecessary communication path and privilege within the cluster. In a cloud-native environment, this means moving away from the "soft shell, hard center" security model. We assume the pod will be compromised. Hardening ensures that a compromised pod cannot talk to the API server, cannot reach other pods, and cannot see the host's kernel.

Reducing the attack surface involves stripping the container image of binaries like curl, wget, or netcat. Infostealer worms rely on these tools to download their second-stage payloads. By using distroless images, we force the attacker to bring their own binaries, which is significantly easier to detect via runtime security tools.
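As a sketch, a multi-stage build onto a distroless base might look like this (the Go entrypoint path is an assumption for illustration):

```dockerfile
# Build stage: the full toolchain exists here only
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server   # hypothetical entrypoint

# Runtime stage: no shell, no curl/wget/netcat for a worm to abuse
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The runtime image contains only the application binary, so any second-stage download tooling an attacker needs must be brought in from outside, which stands out to runtime detection.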

The Impact of Unhardened Clusters on Indian Infrastructure

Many Indian startups utilize Managed Kubernetes services but neglect the "Shared Responsibility Model." While the cloud provider secures the control plane, the user is responsible for the worker node configuration and pod security. I have seen instances where outdated Node Group AMIs were left unpatched for months, leaving them vulnerable to CVE-2022-23648.

Under the DPDP Act 2023, a data breach resulting from such negligence can lead to penalties up to ₹250 crore. The Act mandates "reasonable security safeguards" to protect personal data. An unhardened K8s cluster, lacking basic network policies or RBAC restrictions, fails to meet this legal threshold.


Securing the Control Plane and API Server

The API server is the brain of your cluster. If an infostealer worm gains a service account token with cluster-admin privileges, the entire infrastructure is compromised. We must restrict access to the API server to only authorized IP ranges, often managed via a zero-trust terminal, and ensure that all communication is encrypted.

I recommend auditing your API server flags. Ensure that --anonymous-auth=false is set. By default, some older distributions allow anonymous discovery of API endpoints, which helps attackers map the cluster.
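On a self-managed control plane, these flags live in the kube-apiserver static pod manifest. A minimal excerpt of the relevant settings might look like this (file path shown is the kubeadm default):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --anonymous-auth=false            # block unauthenticated discovery
        - --authorization-mode=Node,RBAC    # never AlwaysAllow
        - --enable-admission-plugins=NodeRestriction
```

On managed services (EKS, AKS, GKE) you cannot edit these flags directly; the equivalent controls are API endpoint access lists and private cluster settings.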

Hardening Kubelet Security on Worker Nodes

The Kubelet runs on every node and manages the pods. If the Kubelet is misconfigured, an attacker can use it to execute commands in any pod on that node. I observed a worm in a Bangalore-based dev-shop that used the Kubelet's read-only port (10255) to gather intelligence on all running workloads.

We must disable the read-only port and enforce authentication on the main Kubelet port (10250). Check your Kubelet configuration with this command:



$ grep -r "readOnlyPort" /var/lib/kubelet/config.yaml

Ensure readOnlyPort is set to 0.

Additionally, always set --protect-kernel-defaults=true. This ensures that the Kubelet will not start if the host kernel parameters (like kernel.panic_on_oops) are not set to secure values, preventing certain types of denial-of-service attacks.
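Put together, a hardened Kubelet configuration covers the read-only port, kernel defaults, and authentication in one place. An illustrative excerpt:

```yaml
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
readOnlyPort: 0                 # disable the unauthenticated 10255 port
protectKernelDefaults: true     # refuse to start on insecure kernel params
authentication:
  anonymous:
    enabled: false              # no anonymous requests on 10250
  webhook:
    enabled: true               # delegate authn to the API server
authorization:
  mode: Webhook                 # delegate authz to the API server
```

After changing the file, restart the kubelet service and re-run the grep check to confirm the setting took effect.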

Implementing Robust Role-Based Access Control (RBAC)

The "Least Privilege" principle is often ignored in favor of developer velocity. I frequently find service accounts with get, list, and watch permissions on secrets across the entire cluster. This is an infostealer's dream.

We use the following command to identify what a specific service account can do:



$ kubectl auth can-i --list --as=system:serviceaccount:default:default

If the output shows access to secrets or configmaps in namespaces where the pod doesn't belong, the RBAC policy must be tightened. We should use RoleBinding instead of ClusterRoleBinding whenever possible to limit the scope of the permissions to a single namespace.
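A namespace-scoped grant might look like the following sketch (the role name and service account are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-config-reader        # hypothetical role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]       # note: no access to secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-config-reader-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-sa                 # hypothetical service account
    namespace: production
roleRef:
  kind: Role
  name: app-config-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the Role and RoleBinding are both scoped to the production namespace, a token stolen from this service account is useless anywhere else in the cluster.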


Network Hardening and Micro-segmentation

By default, every pod in Kubernetes can talk to every other pod. Infostealer worms use this flat network to scan for internal databases, Redis caches, or ElasticSearch instances that might not have authentication enabled.

Enforcing Network Policies for Egress Control

Infostealers need to exfiltrate data to a Command and Control (C2) server. Much like detecting C2 traffic in other environments, implementing strict egress Network Policies can block these outbound connections. I recommend a "Default Deny" egress policy for all production namespaces.

We implemented the following policy to prevent a worm from reaching external IPs while still allowing internal DNS resolution:


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-infostealer
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8   # Allow internal traffic
      ports:
        - protocol: TCP
          port: 53             # DNS only
        - protocol: UDP
          port: 53

This configuration effectively kills the worm's ability to "phone home." Without a way to exfiltrate the stolen data, the worm's primary objective is neutralized even if it manages to infect a pod.

Utilizing Pod Security Standards (PSS)

Pod Security Policies (PSP) are deprecated. We now use Pod Security Standards (PSS) and Admission Controllers to enforce security at the time of pod creation. We must enforce the restricted profile for all non-system workloads.

The restricted profile prevents pods from:

  • Running as the root user.
  • Accessing the host network or IPC namespace.
  • Mounting host paths (hostPath volumes).
  • Escalating privileges via allowPrivilegeEscalation: true.

An infostealer worm attempting to exploit CVE-2024-21626 would be blocked by these policies because the exploit requires specific filesystem manipulations that are restricted under the restricted PSS profile.
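With the built-in Pod Security admission controller, the restricted profile is enforced per namespace via labels. For example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject violating pods
    pod-security.kubernetes.io/audit: restricted     # record violations in audit log
    pod-security.kubernetes.io/warn: restricted      # warn on kubectl apply
```

Setting audit and warn alongside enforce gives developers feedback at apply time instead of a silent rejection.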


Managing Secrets and Encryption at Rest

Kubernetes secrets are, by default, stored in etcd as base64 encoded strings—not encrypted. If an attacker gains access to the etcd backups or the etcd API, they have every secret in the cluster.

I recommend using a Cloud KMS (Key Management Service) provider to encrypt secrets at the application layer. For Indian infrastructure hosted on AWS (Mumbai region) or Azure (Central India), this integration is native. Furthermore, avoid passing secrets as environment variables. Infostealers can easily read /proc/1/environ to grab these keys. Use volume-mounted secrets instead, as they are harder to scrape en masse.
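As a sketch, mounting a secret as a read-only volume rather than injecting it into the environment looks like this (the pod, image, and secret names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api                  # hypothetical pod
spec:
  containers:
    - name: app
      image: payments-api:1.0         # illustrative image
      volumeMounts:
        - name: api-credentials
          mountPath: /etc/secrets     # keys read from files, not env vars
          readOnly: true
  volumes:
    - name: api-credentials
      secret:
        secretName: banking-api-keys  # hypothetical secret
```

Unlike environment variables, these values never appear in /proc/1/environ, so a worm scraping process environments across the node comes up empty.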

Automated Vulnerability Scanning

Static analysis of container images is the first line of defense. We integrate Trivy into the CI/CD pipeline to catch high-risk CVEs before they ever reach the container registry.



$ trivy image --severity HIGH,CRITICAL --scanners vuln,secret,misconfig <image:tag>

If the scan detects runc or containerd vulnerabilities in the base image, the build must fail. This prevents the "Initial Access" phase of the worm's lifecycle.


Compliance and Monitoring with CERT-In Mandates

The Indian Computer Emergency Response Team (CERT-In) has strict mandates regarding the retention of logs for 180 days. Infostealer worms often attempt to clear /var/log/containers/ to hide their footprint. To remain compliant and maintain visibility, we must stream logs to an external, immutable log aggregator. Utilizing a robust SIEM for log monitoring ensures that these records remain tamper-proof and accessible for forensic analysis.

Real-time Threat Detection with eBPF

Standard log monitoring is often too slow to catch a worm in action. We use eBPF-based tools like Falco to monitor system calls in real-time. We look for specific "indicators of compromise" (IOCs) such as:

  • Unexpected outbound connections from a database pod.
  • The execution of a shell inside a container (exec).
  • Modification of sensitive files like /etc/shadow or /root/.ssh/authorized_keys.
  • Spawning of a process with root privileges from a non-root container.

When Falco detects these events, we trigger an automated response to isolate the pod. This is critical because worms spread through the cluster in seconds; manual intervention is rarely fast enough.
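A custom Falco rule for the shell-execution IOC might be sketched as follows. Falco ships similar rules by default; this version is illustrative and uses the built-in spawned_process and container macros:

```yaml
- rule: Shell Spawned Inside Container
  desc: Detects an interactive shell started in any container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh, dash)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, mitre_execution]
```

The alert output fields feed directly into an automated responder that can label the pod for network isolation or delete it outright.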

Running Kube-bench for Continuous Auditing

To ensure our hardening efforts don't drift over time, we run kube-bench regularly. This tool checks the cluster against the CIS Kubernetes Benchmark.



$ kube-bench run --targets node,master --version 1.29

The output provides a clear pass/fail report for every security configuration. I've found that running this as a CronJob within the cluster and alerting on any "FAIL" status is the most effective way to maintain a hardened posture.
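One way to schedule this is a CronJob running the official kube-bench image. This is a simplified sketch: the namespace and schedule are assumptions, and a production setup would also mount /etc/kubernetes and other paths the benchmark inspects:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench-audit
  namespace: security              # hypothetical namespace
spec:
  schedule: "0 3 * * *"            # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          hostPID: true            # kube-bench needs host process visibility
          restartPolicy: Never
          containers:
            - name: kube-bench
              image: docker.io/aquasec/kube-bench:latest
              command: ["kube-bench", "run", "--targets", "node"]
              volumeMounts:
                - name: var-lib-kubelet
                  mountPath: /var/lib/kubelet
                  readOnly: true
          volumes:
            - name: var-lib-kubelet
              hostPath:
                path: /var/lib/kubelet
```

Pair the job with log-based alerting on any line containing "FAIL" so configuration drift surfaces within a day.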


The Future of Kubernetes Security: Zero Trust

The shift toward Zero Trust in Kubernetes means we no longer trust the network, the identity, or the container runtime. We are seeing a move toward Kata Containers or Firecracker microVMs for high-risk workloads, which provide a much stronger isolation boundary than traditional namespaces.

For Indian enterprises handling sensitive financial data, the combination of the DPDP Act 2023 and the increasing sophistication of infostealer worms makes hardening a non-negotiable requirement. For teams looking to upskill, our Academy courses provide deep dives into cloud-native defense and security engineering. Security is not a one-time setup; it is a continuous cycle of auditing, patching, and monitoring.

Analyzing Worm Persistence Mechanisms

When we analyzed a sample of a recent K8s-native worm, we found it used ConfigMaps to store its configuration, allowing it to survive pod restarts. It also leveraged MutatingAdmissionWebhooks to inject its malicious payload into every new pod created in the cluster. This level of persistence requires a deep understanding of K8s internals, and our hardening strategies must be equally deep.

We must monitor for unauthorized changes to ValidatingWebhookConfigurations and MutatingWebhookConfigurations. These are high-value targets for any attacker looking to maintain long-term access to your infrastructure.



$ kubectl get mutatingwebhookconfigurations
$ kubectl get validatingwebhookconfigurations

If you see webhooks that your team didn't explicitly create, your cluster control plane has likely been compromised.

Final Technical Insight

Always check the hostPath mounts in your cluster. If you find a pod mounting /, /etc, or /var/run/docker.sock, you have essentially granted that pod root access to your entire infrastructure. Use a Policy Engine like Kyverno or OPA Gatekeeper to block these mounts globally across your production environment.
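A Kyverno ClusterPolicy along the lines of the project's published disallow-host-path policy might look like this:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
spec:
  validationFailureAction: Enforce   # block violating pods, don't just audit
  background: true
  rules:
    - name: block-host-path-volumes
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "hostPath volumes are forbidden."
        pattern:
          spec:
            =(volumes):
              - X(hostPath): "null"
```

The =(volumes) anchor applies the check only when a volumes list exists, and X(hostPath) denies any entry that sets a hostPath, so legitimate pods without host mounts are unaffected.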



Next Command: Run this to find all hostPath mounts in your cluster immediately:

$ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.volumes[*].hostPath.path}{"\n"}{end}'
