This post is a rewrite of my earlier blog post from Nov 2023 with many new insights and updates.
Most people believe 2^64 − 1 = 18446744073709551615, or 0xFFFFFFFFFFFFFFFF in hexadecimal, to be the largest number representable in 64 bits. In English, it’s quite the mouthful: eighteen quintillion four hundred forty-six quadrillion seven hundred forty-four trillion seventy-three billion seven hundred nine million five hundred fifty-one thousand six hundred fifteen.
That is indeed the maximum possible value of a 64-bit unsigned integer, available as data type uint64_t in C or u64 in Rust. Floating point data types can represent much larger values, courtesy of their base-2 exponent. The 64-bit double floating point format has a largest (finite) representable value of 2^1024 × (1 − 2^−53) ≈ 1.8×10^308.
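Both limits are easy to check with Python's arbitrary-precision integers (a quick sketch; the constants follow from the IEEE 754 binary64 format, where the largest finite double 2^1024 × (1 − 2^−53) equals the integer 2^1024 − 2^971 exactly):

```python
import sys

# The maximum 64-bit unsigned integer: 2^64 - 1.
u64_max = 2**64 - 1
print(u64_max)       # 18446744073709551615
print(hex(u64_max))  # 0xffffffffffffffff

# The largest finite double, 2^1024 * (1 - 2^-53) = 2^1024 - 2^971.
print(sys.float_info.max == 2**1024 - 2**971)  # True
print(sys.float_info.max)                      # about 1.8e308
```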
What if we allow representations beyond plain data types? Since we want representations to remain computable, the most general kind of representation would be a program in some programming language. But the program must be small enough to fit in 64 bits.
The largest number programmable in 64 bits
The smallest possible valid C program is “main(){}”, consisting of 8 ASCII characters. ASCII is a 7-bit character encoding standard representing 128 unique characters, but all modern computers use 8-bit bytes to store either plain ASCII or UTF-8, a Unicode character encoding that’s backward compatible with ASCII. So we’ll consider the above all-scaffold do-nothing program to be the only valid 64-bit C program.
Plenty of other languages require no such scaffolding. For instance, Linux features the arbitrary precision calculator bc. It happily computes the 954242-digit number 9^999999 = 35908462…48888889, making it programmable in 8 bytes. So is the much larger 9^9^9^99 = 9^(9^(9^99)) with over 10^10^94 digits, which bc is far less happy to compute. If bc supported the symbol ! for computing factorials, then 9!!!!!!! would represent a much larger number still.
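The digit counts can be verified with logarithms, without materializing the numbers themselves (a sketch in Python; bc would work too, just more slowly):

```python
import math

# Decimal digits of 9^999999: floor(999999 * log10(9)) + 1.
digits = math.floor(999999 * math.log10(9)) + 1
print(digits)  # 954242

# For 9^(9^(9^99)): the digit count is about 9^(9^99) * log10(9),
# and taking log10 twice gives 99*log10(9) + log10(log10(9)) ~ 94.4,
# i.e. over 10^10^94 digits.
print(99 * math.log10(9) + math.log10(math.log10(9)))
```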
Allowing such primitives feels a bit like cheating though. Would we allow a language that has the Ackermann function predefined, letting the 8-byte expression ack(9,9) represent a truly huge number?
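For reference, the two-argument Ackermann–Péter function can be sketched in a few lines of Python; it is practical only for tiny arguments, which is exactly the point:

```python
def ack(m: int, n: int) -> int:
    """Ackermann-Peter function: grows faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # 9   (ack(2, n) = 2n + 3)
print(ack(3, 3))  # 61  (ack(3, n) = 2^(n+3) - 3)
# ack(9, 9) is astronomically beyond any direct computation.
```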
Ackermann considered unhelpful