
Types and Neural Networks

Posted on April 20, 2026

[This is cross-posted to the GLAIVE blog]

Neural networks are generating an increasing share of code in languages that enable highly generic and provably correct programming: Idris, Lean, and Agda, for example.

However, most frontier models generating this code – large language models (LLMs) – separate the process of training from the process of typechecking. They are trained to produce output of a fixed type: List Token. To get valid code, that output is then parsed after the fact, in multiple ad-hoc ways, into the types of a particular language.
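One of these ad-hoc approaches can be sketched as rejection sampling: draw raw text from the model, try to parse it into the target type, and retry on failure. The sketch below is illustrative only – the sampler and the target type (a JSON list of integers) are assumptions, not any particular system's implementation:

```python
import json
from typing import Callable

def generate_typed(sample: Callable[[], str], max_tries: int = 10) -> list[int]:
    """Post-hoc type enforcement by rejection sampling: draw raw
    (untyped) model output, then parse and typecheck it; retry until
    it inhabits the target type (here: a JSON list of ints)."""
    for _ in range(max_tries):
        raw = sample()  # untyped output: just a string of tokens
        try:
            value = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not even syntactically valid; sample again
        if isinstance(value, list) and all(isinstance(x, int) for x in value):
            return value  # well-typed output
    raise ValueError("no well-typed sample within the retry budget")

# Hypothetical model outputs: two ill-typed samples, then a valid one.
samples = iter(["[1, 2,", "not json at all", "[1, 2, 3]"])
print(generate_typed(lambda: next(samples)))  # → [1, 2, 3]
```

Note the cost of doing this after training: ill-typed samples are simply thrown away, and nothing guarantees a well-typed one arrives within the retry budget.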

What are these ad-hoc ways? Do they work, and should we expect them to? And more importantly, can we rebuild LLMs from the ground up so that they’re trained to produce typed output?

Enforcing types after training

Today, LLMs are trained to predict the next token in a given training corpus, resulting in functions of type

LLM : List Token -> D (List Token)

where D (List Token) here denotes a distribution over lists of tokens. LLMs consume an input prompt and produce this distribution one token at a time – sampling from the next-token distribution and feeding the result back in. For the purposes of this blog post, we’ll consider Token to simply be Maybe Char, i.e. either a character the model is able to generate, or a STOP token which signals the termination of the generative process.
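Concretely, that sample-and-feed-back loop can be sketched as below, with Token rendered as Optional[str] and None playing the role of STOP. The next-token distribution is a hypothetical stand-in for a trained model, not a real LLM:

```python
import random
from typing import Optional

Token = Optional[str]  # Maybe Char: a character, or None as STOP

def toy_next_token_distribution(generated: list[Token]) -> dict[Token, float]:
    """Hypothetical stand-in for LLM : List Token -> D (List Token):
    a degenerate model that always says "hi" and then emits STOP."""
    script: list[Token] = ["h", "i", None]
    return {script[min(len(generated), 2)]: 1.0}

def generate(rng: random.Random) -> str:
    """Autoregressive loop: sample from the next-token distribution,
    feed the result back in, and terminate on the STOP token."""
    out: list[Token] = []
    while True:
        dist = toy_next_token_distribution(out)
        tok = rng.choices(list(dist), weights=list(dist.values()), k=1)[0]
        if tok is None:  # STOP signals termination
            return "".join(t for t in out if t is not None)
        out.append(tok)

print(generate(random.Random(0)))  # → hi
```

Nothing in this loop knows anything about the types of the language being generated: the output is a bare string, and any typechecking has to happen afterwards.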
