See how AWS Lambda arm64 stacks up against x86_64 in real-world benchmarks across CPU, memory, and I/O workloads using Node.js, Python, and Rust.
If you know me at all, you know I’m a big proponent of serverless services when they’re used for the right workloads. I helped launch a new product that increased usage by over 30x so far this year. Choosing serverless was a big part of why that launch went as smoothly as it did. Given how heavily I use Lambda, I have a vested interest in how its performance and architecture evolve.
Initially, Amazon Web Services (AWS) Lambda only supported x86_64-based compute. In 2021, AWS added support for arm64-based Graviton processors, which were advertised as offering equal or better performance at a lower price point and with a smaller environmental footprint.
Back in October 2023, AWS published a blog post titled "Comparing AWS Lambda Arm vs. x86 Performance, Cost, and Analysis." This post was a great reference at the time, but nearly two years later, I haven’t seen many follow-up benchmarks either on the official AWS blog or from the community. I’ve been wondering how things look in 2025 if you apply a similar methodology, which led me to build a more modern, generic benchmark of my own.
Going into this, I expected arm64 to be the most performant architecture and Rust to be the most performant runtime, but I wanted actual data to support my assumptions. So, I built a benchmark that runs Lambda functions on both x86_64 and arm64 architectures across CPU-intensive, memory-intensive, and light workloads, using the actively supported AWS runtimes for Node.js, Rust, and Python. While you should always benchmark and evaluate the performance of your real-world production workloads, generic benchmarks are still useful for spotting broad performance trends.
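The actual workload implementations live in the benchmark repository; as a rough illustration of the kind of function being measured, here is a minimal sketch of a CPU-bound Python handler. The function name, event shape, and iteration count are hypothetical and not taken from the benchmark code.

```python
import json
import time


def handler(event, context):
    """Hypothetical CPU-bound workload: tight integer-mixing loop.

    This is only an illustration of the kind of function the benchmark
    measures; the real workloads live in the
    aws-lambda-performance-benchmarks repository.
    """
    iterations = int(event.get("iterations", 1_000_000))

    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        # Simple integer math keeps the work CPU-bound with a tiny memory footprint.
        acc = (acc * 31 + i) % 1_000_000_007
    elapsed_ms = (time.perf_counter() - start) * 1000

    return {
        "statusCode": 200,
        "body": json.dumps(
            {"iterations": iterations, "checksum": acc, "elapsed_ms": elapsed_ms}
        ),
    }
```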
In this post, I’ll first highlight the high-level findings. Then I’ll walk through the benchmark design (workloads, runtimes, and configurations), and finally, I’ll dig into the detailed results. Unlike the AWS benchmark mentioned earlier, this project is fully open source and available on GitHub. You’re welcome to replicate my results, extend the tests, or adapt them to your own workloads. You can find the code in the aws-lambda-performance-benchmarks repository. The results of a recent benchmark run are also published to that repo.
Note: This benchmark includes the officially supported Rust runtime (announced GA on November 14, 2025) and the Python 3.14 runtime (announced GA on November 18, 2025). I’ll talk more about those under Runtimes below.
TLDR: The Winners
If you don’t have time to read the entire post, here are the key takeaways. I ran the benchmark several times in the us-east-2 (Ohio) region and observed similar results across runs. The results shared in this post come from my most recent run, which tested 42 Lambda functions (7 runtimes × 2 architectures × 3 workloads). After collecting samples, I removed outliers using basic statistical techniques and calculated the mean along with the P50 (median), P90, P95, and P99 percentiles.
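The post doesn't spell out which "basic statistical techniques" were used to trim outliers; a common choice is an IQR filter, so the sketch below assumes that approach purely to illustrate the shape of the analysis: filter extreme samples, then summarize what remains.

```python
import statistics


def summarize(durations_ms: list[float]) -> dict[str, float]:
    """Drop IQR outliers, then report the mean and common percentiles.

    The 1.5x-IQR filter is an assumption -- the post only says "basic
    statistical techniques" -- but the overall flow (trim, then summarize)
    matches the methodology described.
    """
    q1, _, q3 = statistics.quantiles(durations_ms, n=4)  # quartiles
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    kept = [d for d in durations_ms if lo <= d <= hi]

    def pct(p: int) -> float:
        # statistics.quantiles(n=100) returns the 1st..99th percentile cut points.
        return statistics.quantiles(kept, n=100)[p - 1]

    return {
        "mean": statistics.mean(kept),
        "p50": statistics.median(kept),
        "p90": pct(90),
        "p95": pct(95),
        "p99": pct(99),
    }
```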
Performance champion: Rust on arm64 is the most performant and cost-efficient combination overall. There are a few cases where Rust on x86_64 edges out arm64 by a thin margin, but with arm64's 20% price discount, it still wins on cost efficiency.
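To make the "still wins on cost efficiency" point concrete, here is a rough cost-per-invocation comparison. The per-GB-second rates are approximate first-tier prices at the time of writing and the durations are hypothetical; check current AWS Lambda pricing before relying on these numbers.

```python
# Duration-based charge only; the per-request charge is identical for both
# architectures and is omitted here.
PRICE_PER_GB_SECOND = {"x86_64": 0.0000166667, "arm64": 0.0000133334}  # approximate


def invocation_cost(arch: str, memory_mb: int, duration_ms: float) -> float:
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND[arch]


# Hypothetical numbers: even if arm64 runs ~5% slower on a given workload,
# the ~20% lower per-GB-second rate still makes it cheaper per invocation.
print(invocation_cost("x86_64", 1024, 100.0))  # x86_64 at 100 ms
print(invocation_cost("arm64", 1024, 105.0))   # arm64 at 105 ms, still cheaper
```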