Encore started as a Go framework with a Go runtime, Go CLI, Go parser, and Go compiler. When we decided to support TypeScript, the straightforward choice would have been to write the runtime in TypeScript too, or extend the Go runtime with some kind of bridge, but we ended up writing a new runtime from scratch in Rust.
There were two reasons for this beyond what the Go sidecar prototype showed us (more on that below). First, we knew we wanted to extend Encore to more languages over time, and we'd seen projects like Prisma and Pydantic successfully use a Rust core with bindings into Node.js and Python respectively. Writing the core logic once in Rust and binding it to each language runtime meant we wouldn't be reimplementing infrastructure handling for every language we add. Second, Node.js is fundamentally single-threaded. By moving everything that isn't business logic into Rust, the whole infrastructure layer (the HTTP request lifecycle, database connection management, pub/sub, tracing) runs fully multi-threaded on tokio. That's a performance gain that isn't achievable within Node.js itself.
Two years and 67,000 lines later, the runtime handles the full HTTP request lifecycle (routing, request parsing and validation, response serialization), database connection pooling and querying, pub/sub across three cloud providers, distributed tracing, metrics collection, object storage, caching, and an API gateway powered by Pingora. The TypeScript code your application runs is the business logic, and everything underneath it is Rust. This post walks through the decisions that got us here, the problems that weren't obvious going in, and what we'd do differently.
The Go runtime worked well for Go applications and still does. It compiles into the application binary and handles infrastructure concerns at the framework level. The obvious approach for TypeScript support would have been to run the Go runtime as a sidecar process alongside Node.js, with the two communicating over IPC.
We prototyped this, and the latency overhead of serializing every database query, pub/sub message, and trace event across a process boundary added up fast. A single API request that touches a database and publishes an event would cross the IPC boundary six or seven times. In benchmarks, the sidecar approach added 2-4ms of overhead per request from serialization and context switching alone, before any actual work happened.
The other issue was operational. Two processes means two things to monitor, two things that can crash independently, two sets of logs to correlate. For local development that's manageable, but in production across dozens of services the failure modes multiply.
So the runtime needed to live in the same process as the Node.js event loop, which meant writing it either in C/C++ with N-API bindings or in Rust with napi-rs. Rust gave us the same safety guarantees the Go runtime had (memory safety, thread safety, no data races), plus access to the async ecosystem (tokio) for handling thousands of concurrent connections without blocking the Node.js event loop.
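Tokio and napi-rs don't fit in a self-contained snippet, but the hand-off pattern they enable can be sketched with std threads alone: a single-threaded caller (standing in for the Node.js event loop) submits work to a pool of worker threads and collects the results, without blocking on any individual job. Everything here is invented for illustration; in the real runtime, tokio's scheduler plays the role of the worker pool and napi-rs bridges the two sides.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Dispatch `queries` to `n_workers` threads and collect the results.
/// The caller never blocks on any single piece of work; workers drain
/// the shared job channel concurrently.
pub fn run_on_pool(queries: Vec<String>, n_workers: usize) -> Vec<String> {
    let (job_tx, job_rx) = mpsc::channel::<String>();
    let (res_tx, res_rx) = mpsc::channel::<String>();
    // mpsc receivers aren't shareable, so the workers take turns via a Mutex.
    let job_rx = Arc::new(Mutex::new(job_rx));

    for q in queries {
        job_tx.send(q).unwrap();
    }
    drop(job_tx); // close the job channel so workers exit once it's drained

    let mut handles = Vec::new();
    for _ in 0..n_workers {
        let job_rx = Arc::clone(&job_rx);
        let res_tx = res_tx.clone();
        handles.push(thread::spawn(move || loop {
            // Lock only long enough to pull one job off the channel.
            let job = { job_rx.lock().unwrap().recv() };
            match job {
                Ok(q) => res_tx.send(format!("handled: {q}")).unwrap(),
                Err(_) => break, // channel closed and empty
            }
        }));
    }
    drop(res_tx); // only worker clones remain; res_rx ends when they finish

    let results: Vec<String> = res_rx.iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    results
}

fn main() {
    let out = run_on_pool(vec!["SELECT 1".into(), "SELECT 2".into()], 4);
    println!("{} results", out.len());
}
```

The important property is the one the post describes: the submitting side stays free to do other work while infrastructure tasks run on as many threads as the machine offers.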
TypeScript support was developed incrementally over several months of internal work across many pull requests. The public release (#1073) shipped three Rust crates together: the core runtime, the JavaScript NAPI bindings, and a TypeScript parser. The framework only works when all of these pieces are in place, so the release had to be atomic even though the development wasn't.
The core runtime (`runtimes/core`) is structured as a set of managers, each responsible for one infrastructure concern:
```rust
pub struct Runtime {
    api: api::Manager,
    sqldb: sqldb::Manager,
    pubsub: pubsub::Manager,
    objects: objects::Manager,
    metrics: metrics::Manager,
    secrets: secrets::Manager,
}
```
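To make the pattern concrete, here is a rough std-only sketch of how managers like these compose: each manager owns exactly one concern, and the runtime constructs and holds all of them. Only two of the six managers are fleshed out, and their fields and methods are invented for the example rather than taken from Encore's code.

```rust
// Hypothetical manager sketch: names mirror the struct above,
// but the internals are illustrative only.

mod api {
    pub struct Manager {
        pub routes: Vec<String>,
    }
    impl Manager {
        pub fn new() -> Self {
            Manager { routes: Vec::new() }
        }
        // Register an HTTP route with this manager.
        pub fn register(&mut self, path: &str) {
            self.routes.push(path.to_string());
        }
    }
}

mod sqldb {
    pub struct Manager {
        pub pool_size: usize,
    }
    impl Manager {
        pub fn new(pool_size: usize) -> Self {
            Manager { pool_size }
        }
    }
}

pub struct Runtime {
    pub api: api::Manager,
    pub sqldb: sqldb::Manager,
}

impl Runtime {
    // The runtime wires every manager up once at startup.
    pub fn new() -> Self {
        Runtime {
            api: api::Manager::new(),
            sqldb: sqldb::Manager::new(10),
        }
    }
}

fn main() {
    let mut rt = Runtime::new();
    rt.api.register("/users");
    println!("routes: {:?}, pool size: {}", rt.api.routes, rt.sqldb.pool_size);
}
```

The payoff of this layout is that each concern can be developed, tested, and swapped independently, while the runtime presents one composed surface to the language bindings.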