
A tail-call interpreter in (nightly) Rust


Last week, I wrote a tail-call interpreter using the become keyword, which was recently added to nightly Rust (seven months ago is recent, right?).

It was a surprisingly pleasant experience, and the resulting VM outperforms both my previous Rust implementation and my hand-coded ARM64 assembly. Tail-call-based techniques have been all the rage recently (see this overview); consider this my trip report from implementing a simple but non-trivial system.
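To make the technique concrete, here's a minimal sketch of tail-call dispatch for a toy three-opcode stack machine. This is my own illustration, not the author's Uxn code: the opcodes, struct, and names are hypothetical. On nightly Rust with `#![feature(explicit_tail_calls)]`, each `return` in tail position below would be written `become`, which guarantees the call compiles to a jump rather than growing the stack; the version shown uses plain `return` so it compiles on stable.

```rust
// Hypothetical tail-call interpreter sketch (not the article's code).
// Each opcode handler finishes by dispatching the next opcode; with
// nightly's `become`, that chain never grows the call stack.

struct Vm {
    pc: usize,
    code: Vec<u8>,
    stack: Vec<u8>,
}

type Handler = fn(&mut Vm);

// Toy opcode table: 0 = halt, 1 = push the next byte, 2 = add top two.
static TABLE: [Handler; 3] = [op_halt, op_push, op_add];

fn dispatch(vm: &mut Vm) {
    let op = vm.code[vm.pc] as usize;
    vm.pc += 1;
    // Nightly: `become TABLE[op](vm);`
    return TABLE[op](vm);
}

fn op_halt(_vm: &mut Vm) {
    // Fall off the end: the dispatch chain terminates here.
}

fn op_push(vm: &mut Vm) {
    let b = vm.code[vm.pc];
    vm.pc += 1;
    vm.stack.push(b);
    // Nightly: `become dispatch(vm);`
    return dispatch(vm);
}

fn op_add(vm: &mut Vm) {
    let b = vm.stack.pop().unwrap();
    let a = vm.stack.pop().unwrap();
    vm.stack.push(a.wrapping_add(b));
    // Nightly: `become dispatch(vm);`
    return dispatch(vm);
}

fn main() {
    // Program: push 2, push 3, add, halt.
    let mut vm = Vm { pc: 0, code: vec![1, 2, 1, 3, 2, 0], stack: vec![] };
    dispatch(&mut vm);
    println!("{}", vm.stack[0]); // prints 5
}
```

The appeal over a `match`-in-a-loop interpreter is that each handler ends in exactly one guaranteed jump, giving the compiler small, independently optimizable functions, which is the same shape hand-written assembly dispatch takes.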

For those keeping track at home, this is the latest in my exploration of high-performance emulation of the Uxn CPU, which runs a bunch of applications in the Hundred Rabbits ecosystem.

If you want to read the whole saga, here's the list:

Experimenting with LLMs proved controversial, which wasn't a surprise; I'm pleased to declare that all of the tail-call code is human-written, and the new backend can be used as a substitute for the x86 assembly backend at a minor performance penalty.

(This blog post is also entirely human-written, per my personal standards)

The next few sections summarize previous work, so feel free to skim them if you've done the reading and jump straight to tailcalls in Rust.

Basics of Uxn emulation

Uxn is a simple stack machine with 256 instructions. The whole CPU has just over 64K of space, split between a few memories:
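That split can be sketched as a struct. The field names here are my own, not from the article's implementation, but the sizes follow the Uxn spec: 64 KiB of main RAM addressed by 16-bit pointers, two 256-byte circular stacks, and a 256-byte device page for memory-mapped I/O.

```rust
// Sketch of Uxn machine state (my naming, not the article's).
// All regions are byte arrays; "just over 64K" is the 64 KiB of RAM
// plus the two 256-byte stacks and the 256-byte device page.

#[repr(C)] // fixed layout so the sizes below are guaranteed
struct Stack {
    data: [u8; 256],
    ptr: u8, // the 8-bit pointer wraps naturally within the region
}

impl Stack {
    fn push(&mut self, v: u8) {
        self.data[self.ptr as usize] = v;
        self.ptr = self.ptr.wrapping_add(1); // wraps at 256
    }
}

#[repr(C)]
struct Uxn {
    ram: [u8; 65536], // main memory, 16-bit address space
    work: Stack,      // working stack: data operands
    ret: Stack,       // return stack: subroutine addresses (or data, in "r" mode)
    dev: [u8; 256],   // device page: memory-mapped I/O, 16 devices x 16 bytes
}
```

Because the stack pointers are a single byte, overflow is impossible by construction; pushes and pops simply wrap, which removes a whole class of bounds checks from the hot path.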
