
Floating-Point Printing and Parsing Can Be Simple and Fast


(Floating Point Formatting, Part 3)

Russ Cox

research.swtch.com/fp

Posted on Monday, January 19, 2026.

Introduction

A floating-point number f has the form f = m · 2^e, where m is called the mantissa and e is a signed integer exponent. We like to read numbers scaled by powers of ten, not two, so computers need algorithms to convert binary floating-point to and from decimal text. My 2011 post “Floating Point to Decimal Conversion is Easy” argued that these conversions can be simple as long as you don’t care about them being fast. But I was wrong: fast converters can be simple too, and this post shows how.
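To make the form f = m · 2^e concrete, here is a minimal sketch (not code from the post; the helper name decompose is mine) that pulls m and e out of a float64 using the standard IEEE 754 bit layout:

```go
package main

import (
	"fmt"
	"math"
)

// decompose returns m and e such that f = m · 2^e for a finite,
// non-negative float64, using the IEEE 754 layout: 52 fraction bits,
// an 11-bit biased exponent, and an implicit leading 1 for normal values.
func decompose(f float64) (m uint64, e int) {
	b := math.Float64bits(f)
	frac := b & (1<<52 - 1)
	exp := int(b >> 52 & (1<<11 - 1))
	if exp == 0 {
		// Zero or subnormal: no implicit leading bit.
		return frac, -1074
	}
	return frac | 1<<52, exp - 1023 - 52
}

func main() {
	m, e := decompose(0.1)
	fmt.Printf("0.1 = %d · 2^%d\n", m, e) // 0.1 = 7205759403792794 · 2^-56
}
```

The output for 0.1 also shows why conversion is interesting at all: the nearest float64 to 0.1 is not exactly one tenth, and printing and parsing must bridge that gap.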

The main idea of this post is to implement fast unrounded scaling, which computes an approximation to x · 2^e · 10^p, often in a single 64-bit multiplication. On that foundation we can build nearly trivial printing and parsing algorithms that run very fast. In fact, the printing algorithms run faster than all other known algorithms, including Dragon4, Grisu3, Errol3, Ryū, Ryū Printf, Schubfach, and Dragonbox, and the parsing algorithm runs faster than the Eisel-Lemire algorithm. This post presents both the algorithms and a concrete implementation in Go. I expect some form of this Go code to ship in Go 1.27 (scheduled for August 2026).
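To give a flavor of what “scaling in a single 64-bit multiplication” means, here is a minimal sketch, not the post’s actual tables or error analysis: dividing by a small power of ten by multiplying with a precomputed 64-bit fixed-point reciprocal and keeping only the high word of the product.

```go
package main

import (
	"fmt"
	"math/bits"
)

// recip10[p-1] is a 64-bit fixed-point approximation of 10^-p,
// namely ⌊2^64 / 10^p⌋, for p = 1, 2, 3. (Illustrative constants only.)
var recip10 = [...]uint64{
	0x1999999999999999, // ⌊2^64 / 10⌋
	0x028F5C28F5C28F5C, // ⌊2^64 / 100⌋
	0x004189374BC6A7EF, // ⌊2^64 / 1000⌋
}

// divPow10 approximates x / 10^p (1 ≤ p ≤ 3) with one 64-bit multiply:
// the high word of x · ⌊2^64/10^p⌋. The result can be a unit or two too
// small; a real converter chooses its constants and spare bits so that
// this slack never changes the digits it produces.
func divPow10(x uint64, p int) uint64 {
	hi, _ := bits.Mul64(x, recip10[p-1])
	return hi
}

func main() {
	fmt.Println(divPow10(123456789, 3)) // 123456
}
```

bits.Mul64 returns the full 128-bit product as a (hi, lo) pair, so the scaled result is simply the top half; no division instruction is involved.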

This post is rather long—far longer than the implementations!—so here is a brief overview of the sections for easier navigation and understanding where we’re headed.

For the last decade, a new algorithm for floating-point printing or parsing has appeared every few years. Given the simplicity and speed of the algorithms in this post and the increasingly small deltas between successive algorithms, perhaps we are nearing an optimal solution.

Fixed-Point and Floating-Point Numbers

Fixed-point numbers have the form f = m · B^e for an integer mantissa m, a constant base B, and a constant (fixed) exponent e. We can create fixed-point representations in any base, but the most common are base 2 (for computers) and base 10 (for people). This diagram shows fixed-point numbers at various scales that can represent numbers between 0 and 1:

Using a smaller scaling factor increases precision at the cost of larger mantissas. When representing very large numbers, we can use larger scaling factors to reduce the mantissa size. For example, here are various representations of numbers around one billion:
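As a small numeric sketch of the trade-off (with B = 10 and exponents chosen arbitrarily here, not taken from the post’s diagrams): the same value near one billion needs a smaller mantissa at a coarser scale, but the coarser scale can no longer represent it exactly.

```go
package main

import "fmt"

func main() {
	// 1234567890 written as m · 10^e for several fixed exponents e.
	// A larger (coarser) exponent shrinks the mantissa but loses the
	// low-order digits: the value is rounded to the nearest multiple
	// of 10^e.
	const x = 1234567890
	for _, e := range []int{0, 3, 6, 9} {
		scale := 1
		for i := 0; i < e; i++ {
			scale *= 10
		}
		m := (x + scale/2) / scale // round to the nearest multiple of 10^e
		fmt.Printf("%d ≈ %d · 10^%d = %d\n", x, m, e, m*scale)
	}
}
```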
