This post is Part 2 of our educational series on Amazon ECS security. In Part 1 – Under the Hood of Amazon ECS on EC2, we explored how the ECS agent, IAM roles and the ECS control plane provide credentials to tasks. Here we’ll demonstrate how those mechanisms can lead to a known risk when tasks with different privilege levels share the same EC2 host. This cross-task credential exposure highlights the inherent risks of relying on per-task IAM scoping and task execution boundaries when workloads share the same EC2 instance, and it underscores why Fargate (where each task runs in its own micro‑VM) offers stronger isolation. Our goal is to help you understand the IAM privilege boundaries in Amazon ECS and how to configure your services securely.
Cloud environments offer unparalleled flexibility and scalability, but they also introduce complex security considerations. Subtle interactions between AWS services can lead to unintended privilege exposure if you assume container‑level isolation where none exists. In this post, we’ll walk you through how we discovered this cross‑container IAM credential exposure in Amazon ECS (Elastic Container Service), demonstrate the technique (dubbed “ECScape”), and share lessons learned for securing your own environments. By demystifying the control plane, we hope to equip readers with the knowledge to architect for isolation and to advocate for improvements.
TL;DR
Key Insight: We identified a way to abuse an undocumented ECS internal protocol to grab AWS credentials belonging to other ECS tasks on the same EC2 instance. A malicious container with a low‑privileged IAM role can obtain the permissions of a higher‑privileged container running on the same host.
Real‑World Impact: In practice, this means a compromised app in your ECS cluster could assume the role of a more privileged task by stealing its credentials, as long as the two are running on the same instance. Beyond task roles, the task execution role (which AWS documentation states is not accessible to task containers) is also exposed, since it is assumed and used by the agent. This role often has sensitive permissions, such as pulling secrets from AWS Secrets Manager or accessing private container registries, and once compromised it can be abused to extract secrets or artifacts.
How It Works: Amazon ECS tasks retrieve credentials via a local metadata service (169.254.170.2), with a unique credentials endpoint for each task; a minimal fetch sketch follows this list. We discovered that by leveraging how ECS identifies tasks in this process, a malicious actor could masquerade as the ECS agent and obtain credentials for any task on the host. No container breakout (no host root access) was required; however, IMDS access was needed, obtained via clever network and system trickery from within the container’s own namespace. Accessing IMDS lets a container impersonate the ECS agent, and AWS documents how that access can be prevented.
Stealth Factor: The stolen keys work exactly like the real task’s keys. AWS CloudTrail will attribute API calls to the victim task’s role, so initial detection is tough: it appears as if the victim task is performing the actions.
Mitigations: If you run ECS on EC2, avoid deploying high-privilege tasks alongside untrusted or low-privilege tasks on the same instance. Consider dedicated hosts or node isolation for critical services, or use AWS Fargate (each task in its own microVM) for true separation. Disable or restrict IMDS access for tasks wherever possible: block 169.254.169.254, enforce IMDSv2, or use the ECS_AWSVPC_BLOCK_IMDS agent setting (a quick verification sketch also follows this list). Also enforce least privilege on all task IAM roles and drop unneeded Linux capabilities to limit what a compromised container can do. We’ll cover best practices and detection hints later in this post.
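To make the “How It Works” mechanics concrete, here is a minimal Python sketch of the documented credential fetch every AWS SDK performs inside a task. It only works when run inside an ECS task, where the agent injects AWS_CONTAINER_CREDENTIALS_RELATIVE_URI into the environment:

```python
# Minimal sketch: how a process inside an ECS task fetches its own
# task-role credentials from the local credential endpoint.
import json
import os
import urllib.request

# Per-task secret path, e.g. "/v2/credentials/<uuid>". That UUID is
# effectively the only thing guarding one task's credentials from another.
relative_uri = os.environ["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]

with urllib.request.urlopen(f"http://169.254.170.2{relative_uri}") as resp:
    creds = json.load(resp)

# Same shape the SDKs consume: temporary STS keys for this task's role.
print(creds["RoleArn"])
print(creds["AccessKeyId"], creds["Expiration"])
```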
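And here is the verification sketch for the IMDS mitigation. This probe is our own illustration rather than an AWS tool, and it assumes plain TCP reachability of 169.254.169.254 is a reasonable proxy for IMDS exposure:

```python
# Minimal sketch: verify from inside a task that EC2 IMDS is unreachable,
# e.g. after setting ECS_AWSVPC_BLOCK_IMDS=true in /etc/ecs/ecs.config
# (awsvpc network mode) or blocking 169.254.169.254 with iptables.
import socket

def imds_reachable(timeout: float = 2.0) -> bool:
    """Return True if the IMDS endpoint accepts a TCP connection."""
    try:
        with socket.create_connection(("169.254.169.254", 80), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if imds_reachable():
        print("WARNING: IMDS is reachable from this container.")
    else:
        print("OK: IMDS is blocked for this container.")
```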
Quick refresher: on the EC2 launch type, the ECS control plane assumes each task role, pushes those credentials over the ACS (Agent Communication Service) WebSocket to the agent, and the agent serves them to tasks via 169.254.170.2. ECScape abuses that delivery path; a sketch of how the agent locates the ACS endpoint appears below.
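For a feel of the agent’s side of that path, here is a sketch of the documented first step only: reading the instance role’s credentials from IMDSv2 and calling the agent-oriented DiscoverPollEndpoint API. The container-instance ARN, cluster name, and region below are placeholders, and the ACS WebSocket protocol beyond this handshake is undocumented:

```python
# Sketch: how anything holding the instance role (normally the ECS agent)
# discovers the ACS control-plane endpoint.
import json
import urllib.request

import boto3

IMDS = "http://169.254.169.254"

# IMDSv2: obtain a session token, then read the instance role's credentials.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req).read().decode()

def imds_get(path: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    return urllib.request.urlopen(req).read().decode()

role_name = imds_get("/latest/meta-data/iam/security-credentials/").strip()
creds = json.loads(imds_get(f"/latest/meta-data/iam/security-credentials/{role_name}"))

# DiscoverPollEndpoint is a public but agent-oriented ECS API; it returns
# the ACS endpoint the agent then connects to over a SigV4-signed WebSocket.
ecs = boto3.client(
    "ecs",
    region_name="us-east-1",  # placeholder region
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["Token"],
)
resp = ecs.discover_poll_endpoint(
    containerInstance="arn:aws:ecs:...:container-instance/placeholder",  # placeholder
    cluster="my-cluster",  # placeholder
)
print(resp["endpoint"])
```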