Kubernetes Container Escape With HostPath Mounts
Mounting the host filesystem into a container as a volume should keep you up at night if you work with Kubernetes. Let me explain why.
![](https://miro.medium.com/v2/resize:fit:700/1*34_SfcEcC9guWzfQkEtbpQ.png)
What is a “volume”?
Since containers should be ephemeral and stateless, they need some way to save data outside of the container. In some cases, they will even need persistent data storage that can be accessed even after a container restart.
Kubernetes supports many different volume types, such as awsElasticBlockStore. This external volume type mounts an Amazon EBS volume into your container. If your container restarts, the new container mounts the same EBS volume and picks back up the data saved by the previous one.
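For reference, here's a minimal sketch of what that might look like in a pod manifest, assuming an existing EBS volume in the same availability zone as the node; the pod name, mount path, and volumeID are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ebs-example            # placeholder name
spec:
  containers:
  - image: alpine
    name: app
    volumeMounts:
    - mountPath: /data         # where the EBS-backed data appears inside the container
      name: ebs-volume
  volumes:
  - name: ebs-volume
    awsElasticBlockStore:
      volumeID: "<volume-id>"  # placeholder: the ID of an existing EBS volume
      fsType: ext4
```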
But it is also possible to use local storage for persistence, meaning the Kubernetes worker node's host filesystem. Using the local HostPath volume type introduces some interesting security implications. The Kubernetes documentation even calls out this specific warning:
Warning:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
If you are interested in gaining a deep understanding of how container mounts work under the hood, I'd recommend looking into the Linux namespace primitives, specifically the mount namespace. But that goes beyond what's necessary for understanding the rest of this blog post.
How to create a volume?
A volume can be declared in a pod's Kubernetes YAML manifest. You specify .spec.volumes to define what kind of volume it is, along with .spec.containers[*].volumeMounts to define where it should be mounted inside the container.
Here's an example of a pod with a container that mounts the host's root directory to /host inside the container.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: alpine
    name: test-container
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /host
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /
      # this field is optional
      type: Directory
```
If we run this pod and exec into it with a shell, we can see that we have access to the host's root filesystem.
![](https://miro.medium.com/v2/resize:fit:700/1*4hcbG3pgjSKHYja8z6BYQQ.gif)
So… anyone who has the capability to create a pod with unrestricted access to HostPath volumes can easily escalate their privileges.
How can this be exploited… for real?
Hopefully no one is actually mounting the root filesystem directly into their containers. A more realistic example would be a HostPath volume scoped to a specific directory. For example, let's modify the original example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: alpine
    name: test-container
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /var/log/host
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /var/log
      # this field is optional
      type: Directory
```
This example is still flawed from a security perspective, but I’ve seen it legitimately used in professional spaces.
For whatever reason, this pod has access to the host's /var/log directory. This is particularly interesting once we consider how Kubernetes logging actually works. When you run kubectl logs test-pd, the kubelet returns the contents of /var/log/pods/<path_to_0.log>. But from the host's perspective, the pod's 0.log file is a symlink. And since we mounted /var/log with write access, we can point that symlink at any arbitrary file. Let's replace test-pd's log file symlink with a symlink to /etc/shadow.
![](https://miro.medium.com/v2/resize:fit:700/1*ozq55jouQq1VgCF0DsEbug.gif)
And there we have it: we can read the contents of /etc/shadow. The full contents fail to print since they are not in the expected log format, but this can be worked around by using kubectl logs <pod> --tail=<line number> to view them.
Mitigations
There are a few ways to protect against potential misconfigurations relating to HostPath volumes.
1. Scope the HostPath volume to a specific directory.
Be sure to specify only the directory that is essential in spec.volumes[*].hostPath.path. Otherwise, avoid using HostPaths altogether.
2. Ensure the HostPath volume is read-only.
When mounting the volume, you can set it to read-only mode:
```yaml
volumeMounts:
- mountPath: /var/log/host
  name: test-volume
  readOnly: true
```
Bonus points: use a container-optimized OS like Google's Container-Optimized OS or AWS's Bottlerocket, which ship with read-only root filesystems by default.
3. Restrict access to HostPath volumes through an admission controller.
With PodSecurityPolicies now deprecated, and no definitive standard in place at the moment, I recommend using Open Policy Agent Gatekeeper or Kyverno to define policies around HostPath volumes.
Here's a Kyverno ClusterPolicy that denies HostPaths altogether.
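The following is a minimal sketch based on the disallow-host-path pattern from the Kyverno policy library; the policy and rule names are just examples:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
spec:
  validationFailureAction: Enforce   # use Audit to report violations without blocking pods
  background: true
  rules:
  - name: host-path
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset."
      pattern:
        spec:
          =(volumes):
          - X(hostPath): "null"   # negation anchor: reject any volume that sets hostPath
```

If you prefer Gatekeeper, the gatekeeper-library project ships a host-filesystem ConstraintTemplate. Assuming that template is installed, a constraint along these lines (the constraint name and allowed path prefix are just examples) limits HostPath usage to read-only mounts under an approved prefix:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostFilesystem        # kind defined by the gatekeeper-library template
metadata:
  name: restrict-host-filesystem  # example name
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    allowedHostPaths:
    - readOnly: true
      pathPrefix: "/var/log"      # example: only allow read-only mounts under /var/log
```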
Additional Resources
There's a great Aquasec blog post that dives deeper into exploiting the logging HostPath misconfiguration.
More HostPath exploitation techniques are described well by BishopFox.