
Why JPEG XL Ignoring Bit Depth Is Genius (and Why AVIF Can't Pull It Off)


People often ask me what I mean when I say “JPEG XL is simply the best thought-out and most forward-thinking image format there is. Nothing else is close.” This article is just one example of why.

When I heard that JPEG XL’s encoder doesn’t care about bit depth, it sounded almost reckless (and I was downright confused). In a world obsessed with 8-bit, 10-bit, 12-bit precision wars, shouldn’t bit depth be fundamental? Isn’t more bits always better?

Here’s the twist: ignoring bit depth isn’t a limitation. It turns out it might be a brilliant design decision for modern image compression. And it reveals a fundamental philosophical difference between JPEG XL and AVIF that has massive implications for image quality, workflow simplicity, and future-proofing.

Let me explain why this “non-feature” is actually a superpower.

The Problem: Bit Depth Is Just a Convention, Not Reality

When Fractional first started building the JPEG XL community site, I ran tens of thousands of image tests for various parts of the site. I was really confused when I forced cjxl to limit its output to 10 or 12 bits and the resulting files were EXACTLY the same size. So much so that I reached out to Jon (the man leading the JPEG XL charge) to point out what was clearly a bug in the implementation. You can forgive me for being confused when he said it was the expected behaviour.
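
Here is a minimal sketch of that kind of experiment, assuming cjxl is on your PATH and your libjxl build supports the --override_bitdepth flag; the file names are hypothetical:

```python
# A sketch of the experiment above (hypothetical file names; assumes a
# recent libjxl cjxl on PATH with the --override_bitdepth flag).
import os
import subprocess

SRC = "test_image.png"  # hypothetical 16-bit source image

for bits in (8, 10, 12, 16):
    out = f"out_{bits}bit.jxl"
    # Lossy encode at the same distance, forcing a declared bit depth.
    subprocess.run(
        ["cjxl", SRC, out, "-d", "1.0", f"--override_bitdepth={bits}"],
        check=True,
    )
    print(f"{bits:>2}-bit: {os.path.getsize(out):,} bytes")
```

If the encoder behaves as Jon described, every lossy output prints the same byte count: the declared bit depth changes metadata, not the compressed payload.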

Inside JPEG XL’s lossy encoder, all image data becomes floating-point numbers between 0.0 and 1.0. Not integers. Not 8-bit values from 0-255. Just fractions of full intensity.

It doesn’t matter whether your source was an 8-bit JPEG, a 10-bit camera RAW, a 16-bit professional scan, or even 32-bit floating-point data. It all maps into the same [0, 1] range. The encoder sees the meaning of those colors, not how finely they were quantized before arrival.
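
As a toy illustration (this is not libjxl’s actual code), here is what that normalization looks like. The same mid-gray, expressed at three different bit depths, collapses to essentially the same fraction of full intensity:

```python
# Toy normalization (not libjxl's actual code): integer samples become
# fractions of full intensity, so bit depth disappears at this stage.
import numpy as np

def to_unit_range(samples: np.ndarray, bit_depth: int) -> np.ndarray:
    """Map integer samples to floats in [0.0, 1.0]."""
    return samples.astype(np.float64) / (2**bit_depth - 1)

print(to_unit_range(np.array([128]), 8))     # ~0.502 (8-bit mid-gray)
print(to_unit_range(np.array([514]), 10))    # ~0.502 (10-bit mid-gray)
print(to_unit_range(np.array([32896]), 16))  # ~0.502 (16-bit mid-gray)
```

All three print roughly 0.502; the only thing that differs between bit depths is how densely the integers sample that range.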

Think about what this means: bit depth is just a file format convention, not a perceptual reality.
