Deterministic Programming with LLMs
Introduction
If, like me, you regularly read programming blogs (and you do, otherwise you wouldn't be seeing these words), then you are well aware that our industry is deep in the throes of a drastic change. More than half of those blog articles are currently debating the best way for our industry to adapt to the advent of LLMs (large language models) that are capable of writing code.
Much has been said on the ethics of LLM coding, on the best approaches for using it, and on how to use AI agents effectively. In this essay, I would like to make one contribution to that discussion: I want to talk about how LLMs can be used in a deterministic way. I am not suggesting this is the only way to use them, but I do think it is worth considering as one possible tool.
Mathematical Proof and LLMs
Before I look at my own industry, I want to look at what people are doing in a different field that is also affected by LLMs and AI, and try to learn from their experience. Mathematics has been confronting questions about how to use AI, and what mathematicians have done is illuminating.
LLMs are actually quite capable at writing things that look like mathematical proofs. In September 2024, Terence Tao (a Fields-medal-winning mathematician who has been active in examining the use of AI in math) wrote that supervising an LLM felt like "trying to advise a mediocre, but not completely incompetent, (static simulation of a) graduate student." Very few human beings are capable of acting like even a mediocre graduate student in math, and the LLMs have only gotten better in the past year-and-a-half.
But ultimately, LLMs are devices for producing text that looks very much like the documents they have seen, and they are subject to hallucinations. This is especially dangerous in mathematical proofs. Proofs often hinge on very subtle distinctions, and given a plausible-sounding argument, it is easy to fall into the trap of believing it. So it is not safe to accept LLM-written proofs as correct, nor is it particularly safe to ask mathematicians to read an LLM-written proof and expect them to catch every error.
So mathematicians have turned to another computer-based tool: Lean (and other proof assistants). In principle, a mathematical proof takes starting axioms and definitions, applies logical inferences, and thereby reaches a proven conclusion. Actual mathematicians don't generally operate at this level; if they did, even a simple 5-page proof would run to thousands of pages, with too many steps for any human to hope to follow and no way of extracting insight from it. But computers can follow them. Proof assistants like Lean produce exactly such a rigorous step-by-step proof, yet they are not widely used by professional mathematicians, because writing proofs in Lean is genuinely challenging.
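To give a flavor of what "rigorous step-by-step" means, here is a minimal sketch in Lean 4 (a toy theorem, not one of the proofs discussed here). Every tactic line must justify an inference the kernel can check mechanically; if any step is wrong, the proof simply does not compile.

```lean
-- A toy example: addition on natural numbers is commutative.
-- Each tactic discharges one machine-checked inference step.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp                -- base case: m + 0 = 0 + m
  | succ k ih =>                -- inductive step, with hypothesis ih : m + k = k + m
    rw [Nat.add_succ, Nat.succ_add, ih]
```

Even this two-line statement takes several explicit steps; scaling that discipline up to a research-level proof is what makes Lean so demanding for working mathematicians.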
You can see where this is going. In January of 2026, a team successfully got an LLM to create a proof of a meaningfully difficult problem never previously solved by humans. The problem was one of a large set posed by Paul Erdős, but by accident this particular one had been written down incorrectly, which was only discovered a few months earlier.