
We replaced Node.js with Bun for 5x throughput


Update (March 30, 2026): Shortly after this post went live, Bun shipped a fix for the memory leak. 🔥

We replaced Node.js with Bun in one of our most latency-sensitive services and got a 5x throughput increase. We also found a memory leak that only exists in Bun's HTTP model.

The service is called Firestarter. It's our warm start connection broker: it holds thousands of long-poll HTTP connections from idle run controllers, each waiting for work. When a task run arrives, Firestarter matches it to a waiting controller and sends the payload through the held connection. No cold start, no container spin-up. It's in the critical path of every task execution on Trigger.dev.
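The broker pattern described above can be sketched in a few lines. This is a hedged illustration, not Firestarter's actual code: the `Broker` class, `Task` shape, and method names are all assumptions. The core idea is that an idle controller parks on a promise (the held long-poll connection), and an arriving task resolves one of those promises directly.

```typescript
type Task = { runId: string; payload: unknown };

class Broker {
  // Each entry represents one held long-poll connection waiting for work.
  private waiting: Array<(task: Task) => void> = [];

  // A controller calls this and the connection stays open until a task arrives.
  waitForWork(): Promise<Task> {
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  // Match an incoming task run to the first waiting controller, if any.
  dispatch(task: Task): boolean {
    const resolve = this.waiting.shift();
    if (!resolve) return false; // no idle controller; caller must queue or cold-start
    resolve(task);
    return true;
  }
}
```

Because the payload travels through an already-open connection, a successful `dispatch` skips the container spin-up entirely.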

The problem: Firestarter was using too much CPU. It was running on Node.js, spending 31% of its time inside a SQLite query, parsing every request with Zod, and converting headers with Object.fromEntries() on every GET. It worked, but it was slow.
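The `Object.fromEntries()` cost is easy to reproduce: materializing every header into a plain object allocates per request even when the handler only reads one or two headers. The header name below is a made-up example, not one Firestarter actually uses.

```typescript
// Hot path as described: build a full object from all headers on every GET.
function slowPath(headers: Headers): Record<string, string> {
  return Object.fromEntries(headers); // one object + one entry per header, per request
}

// Cheaper alternative: read only the header you need, no intermediate object.
function fastPath(headers: Headers): string | null {
  return headers.get("x-controller-id"); // hypothetical header name
}
```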

It took four rounds of profiling to get there, and we hit a few Bun surprises we haven't seen documented elsewhere.

Phase 1: kill the SQLite query engine

The original connection manager was designed as a generic queryable store. It accepted arbitrary nested metadata, flattened it recursively into key-value pairs, and indexed everything in an in-memory SQLite database. Node 22 shipped with node:sqlite built-in, so it was zero-dependency. SQL gave us flexible partial matching on any combination of fields. It made sense at the time because we didn't know the access pattern yet.
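The recursive flattening step might look something like this (a sketch under assumptions; the actual function name and key format aren't shown in the post). Nested metadata becomes dotted key/value pairs, each of which would get a row in the `metadata_index` table:

```typescript
// Flatten arbitrary nested metadata into [key, value] pairs,
// e.g. { deploy: { region: "us-east-1" } } -> [["deploy.region", "us-east-1"]].
function flatten(
  obj: Record<string, unknown>,
  prefix = "",
  out: Array<[string, string]> = []
): Array<[string, string]> {
  for (const [k, v] of Object.entries(obj)) {
    const key = prefix ? `${prefix}.${k}` : k;
    if (v !== null && typeof v === "object" && !Array.isArray(v)) {
      flatten(v as Record<string, unknown>, key, out); // recurse into nested objects
    } else {
      out.push([key, String(v)]); // leaves (and arrays) are stored as strings
    }
  }
  return out;
}
```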

Turns out the access pattern was always the same 4 fields. Every match attempt ran this query:

SELECT DISTINCT c.id, c.metadata
FROM connections c
JOIN metadata_index mi ON c.id = mi.connection_id
WHERE c.id IN (
  SELECT connection_id
  FROM metadata_index
  WHERE (key = ? AND value = ?)
     OR (key = ? AND value = ?)
     OR (key = ? AND value = ?)
     OR (key = ? AND value = ?)
  GROUP BY connection_id
  HAVING COUNT(DISTINCT key) = ?
)
LIMIT 1
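When every match is an exact equality on the same four fields, the whole query engine can collapse into a Map keyed on a composite string. The post doesn't show its replacement, so this is a plausible sketch only; the field names (`projectId`, `environment`, `version`, `queue`) and class names are assumptions.

```typescript
type Conn = { id: string; metadata: Record<string, string> };

// Join the 4 match fields with a separator that can't appear in values.
const keyOf = (m: Record<string, string>) =>
  [m.projectId, m.environment, m.version, m.queue].join("\u0000");

class ConnIndex {
  private byKey = new Map<string, Conn[]>();

  add(conn: Conn): void {
    const k = keyOf(conn.metadata);
    const bucket = this.byKey.get(k) ?? [];
    bucket.push(conn);
    this.byKey.set(k, bucket);
  }

  // Equivalent of the SQL query's LIMIT 1: O(1) lookup, pop one waiting connection.
  take(fields: Record<string, string>): Conn | undefined {
    return this.byKey.get(keyOf(fields))?.shift();
  }
}
```

The trade-off is losing SQL's flexible partial matching on arbitrary field combinations, which the post says was never actually used.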
