
Functional Programmers need to take a look at Zig


I’ve been tinkering around with Zig to explore what’s possible with comptime. Whenever I evaluate a new language, I use three axes:

1. How well can I express my ideas in this language? In other words, how easily can I express the domain of the program? This is a test of how much noise is applied to the ideas I want to express. Noise is anything that must be written for the program to function but is not relevant to the domain. The canonical example of noise is manual memory management: we must allocate memory for the program to run, but this is orthogonal to the program’s domain; it’s an implementation detail.

2. What facilities does the language provide for creating correct-by-construction systems, and how easily can I program the type system? This is essentially a test of how well I can program the language itself, or how well I can create a deep embedding.

3. What is the mean time to a surprise? In the study of vacuum systems (think outer space) there is a concept called the mean-free path length, or just mean-free path: the average distance a particle can travel in the system without experiencing a collision. It is essentially a metric of how good the vacuum is. Applied to programming languages, I think of it as “How much code can I write before my implementation differs from my understanding of the system I am implementing?” This is why I frame this metric as a “surprise”: it’s “how many lines of code can I write until I experience a surprise?”, where a surprise is a delta between what I think I’ve implemented and what I’ve actually implemented.

Enter Zig. I’m interested in Zig for a few reasons. First, I suspect that comptime is a simpler and more flexible way to achieve much of the type-system programming I’ve seen in the Haskell-verse, and I’ve done enough Haskell (over ten years) that programming the type system is now a hard requirement for me to take any language seriously.
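As a small illustration of what that looks like (my own sketch, not from the article): in Zig, types are ordinary compile-time values, so “generics” are just functions that take and return types.

```zig
const std = @import("std");

// A function with comptime type parameters runs at compile time and
// returns a brand-new struct type; this is Zig's idiom for generics.
fn Pair(comptime A: type, comptime B: type) type {
    return struct {
        first: A,
        second: B,
    };
}

pub fn main() void {
    // Pair(u32, bool) is evaluated by the compiler; the result is an
    // ordinary type that can be instantiated like any other.
    const p = Pair(u32, bool){ .first = 42, .second = true };
    std.debug.print("first = {d}, second = {}\n", .{ p.first, p.second });
}
```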

Second, I am desperately trying to avoid writing a functional systems language. This is probably a blog post in its own right, but the programming language industry has not grokked the meaning of monads. Monads are not some obscure math-y thing that only the big brains think is necessary. Instead, monads are a fundamental abstract-algebraic description of imperative programming as a computational context. They allow a programming language to not have a built-in notion of time (among other things). If I want an imperative programming language, I can implement MonadCont (the continuation monad); if I want a logic programming language, I can implement LogicT (a monad with non-deterministic semantics and backtracking). Not having a built-in notion of time means that my language is de facto more expressive, lets users mold the language to their needs, and raises the optimization ceiling that compilers for the language can achieve.

So how does this connect with systems programming? Well, I’ve been radicalized. I’ve learned enough performance-oriented programming to be dissatisfied with the common functional languages (Haskell, OCaml, Common Lisp/Clojure, Scheme), because each of these languages is predicated on the existence of garbage collection and heaps. I think we are at the tail end of a large-scale experiment with garbage collection. Looking back on the last 30 years, we can conclude that garbage collection does deliver immense value by reducing noise, but the tradeoff is that one ends up with a forest of pointers into the heap, and that will always create a performance ceiling for the program and the language implementation.
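To sketch the performance argument in code (my own illustration, assuming nothing beyond the standard library): a heap-linked structure pays a potential cache miss per pointer hop, while the same data laid out contiguously is a predictable linear scan.

```zig
const std = @import("std");

// A "forest of pointers": every `next` hop is an indirection the CPU
// cannot prefetch well, so traversal is dominated by memory latency.
const Node = struct {
    value: u64,
    next: ?*const Node,
};

fn sumList(head: ?*const Node) u64 {
    var total: u64 = 0;
    var cur = head;
    while (cur) |node| : (cur = node.next) total += node.value;
    return total;
}

// The same values laid out contiguously: a linear scan the hardware
// can prefetch and vectorize.
fn sumSlice(values: []const u64) u64 {
    var total: u64 = 0;
    for (values) |v| total += v;
    return total;
}

test "both layouts compute the same sum" {
    const c = Node{ .value = 3, .next = null };
    const b = Node{ .value = 2, .next = &c };
    const a = Node{ .value = 1, .next = &b };
    const flat = [_]u64{ 1, 2, 3 };
    try std.testing.expectEqual(sumList(&a), sumSlice(&flat));
}
```

Both functions compute the same result; the difference is entirely in how the data sits in memory, which is exactly the dimension a garbage-collected forest of pointers gives up control over.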

To exacerbate matters, I think there is a cognitive risk to garbage collection. Garbage collection makes it too easy not to think about, or care about, the underlying machine and runtime system. This has created a generation of developers who never gained, or have lost, the knowledge of how programs actually execute on a computational machine. Or, to use less flowery language: just look at the era of software that garbage collectors have ushered in. Programs are bloated, slow, and wasteful compared to the literal supercomputers running them. Surely we can do better.

Furthermore, I think the value proposition of garbage collectors has changed. The first garbage collector was invented for LISP in the late 1950s, and once Java brought the technique to prominence in 1995, garbage collectors proliferated, and for good reason. But the machines of 2026 are much different from the machines of 1995 (our languages, however, aren’t). Since 1995, CPU compute has grown something like 10,000x while memory access latency has lagged far behind; in 1995 the two were roughly comparable. So we are using languages designed for the machines of yesteryear that do not consider the machines of today, and as an industry we have (largely) stopped innovating on new languages.

I once saw a talk by Steven Diehl that asked where the next programming language will come from, and it beautifully described the sad state of things. His main point is that the incentives for programming-language innovation are at best misaligned and at worst non-existent. Assume there are three groups capable of innovation: academics, industry, and hobbyists. Academics have no incentive to do the real-world engineering required to make a viable programming language, and any academic who decides to try is committing career suicide. Industry cannot fund long-term projects (due to its culture of shareholder-value maximization) and is tied into sticky network effects. Hobbyists (generally) have neither the time nor the economic means to make something real, which takes decades of full-time work. And so we are stuck at a local maximum.

Okay, now back to Zig. I’m bullish on Zig because Zig (and its BDFL, Andrew Kelley) is still innovating and has the courage to innovate. Here are some of those innovations. Zig discourages the forest-of-pointers approach and encourages better manual memory management through arenas and allocators, which gives users much more control over the memory behavior of their programs (see the sketch below). This is just one reason why stuff written in Zig is so damn fast: Zig programs tend to exploit the machines of today rather than the machines of yesteryear. Zig 0.16 was also just released and reworks the IO system around an interface design; the example from the release notes continues below.
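Before that, to make the arena point concrete, here is a minimal sketch of the allocator-and-arena pattern (my own illustration, not from the article or the release notes), using only std.heap.ArenaAllocator and std.heap.page_allocator from the standard library:

```zig
const std = @import("std");

pub fn main() !void {
    // Back the arena with the OS page allocator; any std.mem.Allocator
    // can serve as the backing allocator.
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    // One deinit releases everything allocated through the arena --
    // no per-object frees, no forest of individually owned pointers.
    defer arena.deinit();

    const allocator = arena.allocator();

    // Allocate freely for the duration of this phase of the program.
    const squares = try allocator.alloc(usize, 16);
    for (squares, 0..) |*s, i| s.* = i * i;

    std.debug.print("last square = {d}\n", .{squares[squares.len - 1]});
}
```

The design point is that allocation policy travels as an explicit value (std.mem.Allocator), so a whole phase of a program can allocate into one arena and be released in a single deinit, instead of tracking the ownership of individual heap pointers.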

... continue reading